Test Report: Docker_macOS 16890

dc702cb3cbb2bfe371541339d66d19e451f60279:2023-07-17:30187

Tests failed (15/317)

TestErrorSpam/setup (21.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-590000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-590000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 --driver=docker : (21.875570205s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1"
error_spam_test.go:110: minikube stdout:
* [nospam-590000] minikube v1.30.1 on Darwin 13.4.1
- MINIKUBE_LOCATION=16890
- KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-590000 in cluster nospam-590000
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-590000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
--- FAIL: TestErrorSpam/setup (21.88s)
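
Note: the cluster itself came up cleanly; the failure is the stderr check at error_spam_test.go:96, which treats the version-mismatch warning as unexpected output (the v1.30.1 binary under test was paired with a kicbase image stamped for v1.31.0). A minimal sketch of the remediation the warning itself suggests, reusing the profile name and flags from this run; with the same pinned image and the same v1.30.1 binary the warning would likely reappear, so aligning binary and image versions is the real fix:

    # Delete the profile created with the mismatched kicbase image ...
    out/minikube-darwin-amd64 delete -p nospam-590000
    # ... then recreate it once the binary and image versions agree.
    out/minikube-darwin-amd64 start -p nospam-590000 -n=1 --memory=2250 --wait=false --driver=docker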

TestIngressAddonLegacy/StartLegacyK8sCluster (268.53s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-476000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0717 12:55:17.334883   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:57:33.495081   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:57:36.358983   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.365500   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.377765   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.399918   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.440702   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.521405   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:36.683583   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:37.005810   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:37.646611   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:38.928864   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:41.489866   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:46.610667   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:57:56.852001   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:58:01.178879   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:58:17.332672   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 12:58:58.293590   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-476000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m28.48841771s)

-- stdout --
	* [ingress-addon-legacy-476000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-476000 in cluster ingress-addon-legacy-476000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0717 12:55:11.636525   41070 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:55:11.636703   41070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:55:11.636710   41070 out.go:309] Setting ErrFile to fd 2...
	I0717 12:55:11.636714   41070 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:55:11.636892   41070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 12:55:11.638468   41070 out.go:303] Setting JSON to false
	I0717 12:55:11.657730   41070 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":14082,"bootTime":1689609629,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 12:55:11.657823   41070 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 12:55:11.679103   41070 out.go:177] * [ingress-addon-legacy-476000] minikube v1.30.1 on Darwin 13.4.1
	I0717 12:55:11.721008   41070 notify.go:220] Checking for updates...
	I0717 12:55:11.742322   41070 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 12:55:11.763277   41070 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 12:55:11.784173   41070 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 12:55:11.805328   41070 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 12:55:11.826324   41070 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 12:55:11.846987   41070 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 12:55:11.868679   41070 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 12:55:11.924707   41070 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 12:55:11.924832   41070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:55:12.020243   41070 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:55:12.009584826 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 12:55:12.041978   41070 out.go:177] * Using the docker driver based on user configuration
	I0717 12:55:12.063652   41070 start.go:298] selected driver: docker
	I0717 12:55:12.063675   41070 start.go:880] validating driver "docker" against <nil>
	I0717 12:55:12.063690   41070 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 12:55:12.067720   41070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:55:12.163160   41070 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:55:12.152983801 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 12:55:12.163336   41070 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 12:55:12.163523   41070 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 12:55:12.185230   41070 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 12:55:12.207094   41070 cni.go:84] Creating CNI manager for ""
	I0717 12:55:12.207131   41070 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 12:55:12.207148   41070 start_flags.go:319] config:
	{Name:ingress-addon-legacy-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:55:12.250060   41070 out.go:177] * Starting control plane node ingress-addon-legacy-476000 in cluster ingress-addon-legacy-476000
	I0717 12:55:12.271159   41070 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 12:55:12.292074   41070 out.go:177] * Pulling base image ...
	I0717 12:55:12.334211   41070 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 12:55:12.334219   41070 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 12:55:12.384403   41070 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 12:55:12.384428   41070 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 12:55:12.419027   41070 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0717 12:55:12.419053   41070 cache.go:57] Caching tarball of preloaded images
	I0717 12:55:12.419403   41070 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 12:55:12.440924   41070 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 12:55:12.461818   41070 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:55:12.673471   41070 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0717 12:55:22.940959   41070 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:55:22.941207   41070 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:55:23.555965   41070 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0717 12:55:23.556301   41070 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/config.json ...
	I0717 12:55:23.556327   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/config.json: {Name:mkead7815cf768d610165064478b4dca35a0d086 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:23.556636   41070 cache.go:195] Successfully downloaded all kic artifacts
	I0717 12:55:23.556662   41070 start.go:365] acquiring machines lock for ingress-addon-legacy-476000: {Name:mk7c06554c933ed54872ded243db5f655426f7c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 12:55:23.556839   41070 start.go:369] acquired machines lock for "ingress-addon-legacy-476000" in 169.284µs
	I0717 12:55:23.556860   41070 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 12:55:23.556971   41070 start.go:125] createHost starting for "" (driver="docker")
	I0717 12:55:23.582856   41070 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 12:55:23.583158   41070 start.go:159] libmachine.API.Create for "ingress-addon-legacy-476000" (driver="docker")
	I0717 12:55:23.583205   41070 client.go:168] LocalClient.Create starting
	I0717 12:55:23.583392   41070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem
	I0717 12:55:23.583464   41070 main.go:141] libmachine: Decoding PEM data...
	I0717 12:55:23.583495   41070 main.go:141] libmachine: Parsing certificate...
	I0717 12:55:23.583618   41070 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem
	I0717 12:55:23.583667   41070 main.go:141] libmachine: Decoding PEM data...
	I0717 12:55:23.583682   41070 main.go:141] libmachine: Parsing certificate...
	I0717 12:55:23.584601   41070 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-476000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 12:55:23.637966   41070 cli_runner.go:211] docker network inspect ingress-addon-legacy-476000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 12:55:23.638113   41070 network_create.go:281] running [docker network inspect ingress-addon-legacy-476000] to gather additional debugging logs...
	I0717 12:55:23.638144   41070 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-476000
	W0717 12:55:23.687661   41070 cli_runner.go:211] docker network inspect ingress-addon-legacy-476000 returned with exit code 1
	I0717 12:55:23.687698   41070 network_create.go:284] error running [docker network inspect ingress-addon-legacy-476000]: docker network inspect ingress-addon-legacy-476000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-476000 not found
	I0717 12:55:23.687715   41070 network_create.go:286] output of [docker network inspect ingress-addon-legacy-476000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-476000 not found
	
	** /stderr **
	I0717 12:55:23.687813   41070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 12:55:23.737116   41070 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0004abe50}
	I0717 12:55:23.737156   41070 network_create.go:123] attempt to create docker network ingress-addon-legacy-476000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0717 12:55:23.737229   41070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-476000 ingress-addon-legacy-476000
	I0717 12:55:23.818266   41070 network_create.go:107] docker network ingress-addon-legacy-476000 192.168.49.0/24 created
	I0717 12:55:23.818303   41070 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-476000" container
	I0717 12:55:23.818418   41070 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 12:55:23.867588   41070 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-476000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-476000 --label created_by.minikube.sigs.k8s.io=true
	I0717 12:55:23.917545   41070 oci.go:103] Successfully created a docker volume ingress-addon-legacy-476000
	I0717 12:55:23.917684   41070 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-476000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-476000 --entrypoint /usr/bin/test -v ingress-addon-legacy-476000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 12:55:24.282773   41070 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-476000
	I0717 12:55:24.282810   41070 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 12:55:24.282823   41070 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 12:55:24.282938   41070 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-476000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 12:55:27.220464   41070 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-476000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.93742407s)
	I0717 12:55:27.220493   41070 kic.go:199] duration metric: took 2.937633 seconds to extract preloaded images to volume
	I0717 12:55:27.220612   41070 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 12:55:27.317205   41070 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-476000 --name ingress-addon-legacy-476000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-476000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-476000 --network ingress-addon-legacy-476000 --ip 192.168.49.2 --volume ingress-addon-legacy-476000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 12:55:27.579601   41070 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Running}}
	I0717 12:55:27.630236   41070 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 12:55:27.683754   41070 cli_runner.go:164] Run: docker exec ingress-addon-legacy-476000 stat /var/lib/dpkg/alternatives/iptables
	I0717 12:55:27.777696   41070 oci.go:144] the created container "ingress-addon-legacy-476000" has a running status.
	I0717 12:55:27.777725   41070 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa...
	I0717 12:55:27.962888   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 12:55:27.962970   41070 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 12:55:28.023935   41070 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 12:55:28.075273   41070 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 12:55:28.075294   41070 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-476000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 12:55:28.165574   41070 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 12:55:28.215232   41070 machine.go:88] provisioning docker machine ...
	I0717 12:55:28.215279   41070 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-476000"
	I0717 12:55:28.215381   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:28.265683   41070 main.go:141] libmachine: Using SSH client type: native
	I0717 12:55:28.266092   41070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55976 <nil> <nil>}
	I0717 12:55:28.266107   41070 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-476000 && echo "ingress-addon-legacy-476000" | sudo tee /etc/hostname
	I0717 12:55:28.404301   41070 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-476000
	
	I0717 12:55:28.404394   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:28.455174   41070 main.go:141] libmachine: Using SSH client type: native
	I0717 12:55:28.455531   41070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55976 <nil> <nil>}
	I0717 12:55:28.455547   41070 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-476000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-476000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-476000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 12:55:28.585367   41070 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 12:55:28.585395   41070 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 12:55:28.585415   41070 ubuntu.go:177] setting up certificates
	I0717 12:55:28.585428   41070 provision.go:83] configureAuth start
	I0717 12:55:28.585517   41070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-476000
	I0717 12:55:28.634925   41070 provision.go:138] copyHostCerts
	I0717 12:55:28.634985   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 12:55:28.635049   41070 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 12:55:28.635056   41070 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 12:55:28.635214   41070 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 12:55:28.635388   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 12:55:28.635430   41070 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 12:55:28.635435   41070 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 12:55:28.635500   41070 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 12:55:28.635624   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 12:55:28.635669   41070 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 12:55:28.635676   41070 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 12:55:28.635743   41070 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 12:55:28.635901   41070 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-476000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-476000]
	I0717 12:55:28.788372   41070 provision.go:172] copyRemoteCerts
	I0717 12:55:28.788444   41070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 12:55:28.788499   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:28.839945   41070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:55:28.933269   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 12:55:28.933355   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 12:55:28.954140   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 12:55:28.954222   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0717 12:55:28.975163   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 12:55:28.975239   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 12:55:28.996331   41070 provision.go:86] duration metric: configureAuth took 410.883742ms
	I0717 12:55:28.996345   41070 ubuntu.go:193] setting minikube options for container-runtime
	I0717 12:55:28.996520   41070 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 12:55:28.996596   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:29.046167   41070 main.go:141] libmachine: Using SSH client type: native
	I0717 12:55:29.046562   41070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55976 <nil> <nil>}
	I0717 12:55:29.046579   41070 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 12:55:29.175495   41070 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 12:55:29.175507   41070 ubuntu.go:71] root file system type: overlay
	I0717 12:55:29.175586   41070 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 12:55:29.175671   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:29.225265   41070 main.go:141] libmachine: Using SSH client type: native
	I0717 12:55:29.225628   41070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55976 <nil> <nil>}
	I0717 12:55:29.225693   41070 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 12:55:29.363859   41070 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 12:55:29.363967   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:29.413118   41070 main.go:141] libmachine: Using SSH client type: native
	I0717 12:55:29.413482   41070 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55976 <nil> <nil>}
	I0717 12:55:29.413496   41070 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 12:55:30.066613   41070 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 19:55:29.361736903 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 12:55:30.066633   41070 machine.go:91] provisioned docker machine in 1.851361869s
	I0717 12:55:30.066640   41070 client.go:171] LocalClient.Create took 6.483361397s
	I0717 12:55:30.066661   41070 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-476000" took 6.48343824s
	I0717 12:55:30.066671   41070 start.go:300] post-start starting for "ingress-addon-legacy-476000" (driver="docker")
	I0717 12:55:30.066682   41070 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 12:55:30.066756   41070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 12:55:30.066825   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:30.116461   41070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:55:30.209816   41070 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 12:55:30.213910   41070 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 12:55:30.213939   41070 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 12:55:30.213946   41070 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 12:55:30.213951   41070 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 12:55:30.213959   41070 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 12:55:30.214046   41070 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 12:55:30.214229   41070 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 12:55:30.214236   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> /etc/ssl/certs/383252.pem
	I0717 12:55:30.214420   41070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 12:55:30.223030   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 12:55:30.243787   41070 start.go:303] post-start completed in 177.091345ms
	I0717 12:55:30.244271   41070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-476000
	I0717 12:55:30.294067   41070 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/config.json ...
	I0717 12:55:30.294498   41070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 12:55:30.294557   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:30.344145   41070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:55:30.434203   41070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 12:55:30.439323   41070 start.go:128] duration metric: createHost completed in 6.882273558s
	I0717 12:55:30.439342   41070 start.go:83] releasing machines lock for "ingress-addon-legacy-476000", held for 6.882424796s
	I0717 12:55:30.439417   41070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-476000
	I0717 12:55:30.489838   41070 ssh_runner.go:195] Run: cat /version.json
	I0717 12:55:30.489869   41070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 12:55:30.489915   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:30.489944   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:30.541254   41070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:55:30.541253   41070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	W0717 12:55:30.734393   41070 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 12:55:30.734484   41070 ssh_runner.go:195] Run: systemctl --version
	I0717 12:55:30.739636   41070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 12:55:30.744880   41070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 12:55:30.767546   41070 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 12:55:30.767633   41070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 12:55:30.783087   41070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 12:55:30.798390   41070 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
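
The three find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains an explicit "name" and a pinned "cniVersion": "1.0.0", and bridge/podman configs lose their IPv6 entries and get their subnet forced to 10.244.0.0/16. A quick sketch for eyeballing the result on the node (file names taken from the configured list above):

    # Loopback config should now carry "name": "loopback" and "cniVersion": "1.0.0"
    sudo cat /etc/cni/net.d/*loopback.conf*
    # Bridge/podman configs should show the rewritten pod subnet
    sudo grep -E '"subnet"|"gateway"' /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/87-podman-bridge.conflist
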
	I0717 12:55:30.798408   41070 start.go:469] detecting cgroup driver to use...
	I0717 12:55:30.798421   41070 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 12:55:30.798525   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 12:55:30.813622   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0717 12:55:30.823226   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 12:55:30.832705   41070 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 12:55:30.832764   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 12:55:30.842517   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 12:55:30.852054   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 12:55:30.861548   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 12:55:30.871094   41070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 12:55:30.880180   41070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
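
Taken together, the sed passes above steer /etc/containerd/config.toml to a known state: the pause image, relaxed OOM-score handling, the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and the CNI conf dir. A spot-check sketch of the keys the substitutions target (expected values read off the patterns, not off the file):

    sudo grep -nE 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected, per the substitutions:
    #   sandbox_image = "registry.k8s.io/pause:3.2"
    #   restrict_oom_score_adj = false
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
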
	I0717 12:55:30.889874   41070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 12:55:30.898366   41070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 12:55:30.906541   41070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 12:55:30.971531   41070 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 12:55:31.042321   41070 start.go:469] detecting cgroup driver to use...
	I0717 12:55:31.042340   41070 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 12:55:31.042404   41070 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 12:55:31.054267   41070 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 12:55:31.054339   41070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 12:55:31.065701   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 12:55:31.082422   41070 ssh_runner.go:195] Run: which cri-dockerd
	I0717 12:55:31.086946   41070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 12:55:31.118345   41070 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 12:55:31.135989   41070 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 12:55:31.224780   41070 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 12:55:31.289189   41070 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 12:55:31.289209   41070 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
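
The 144-byte payload written to /etc/docker/daemon.json is not shown in the log; a plausible minimal shape, given that docker must come up on the cgroupfs driver (the CgroupDriver probe appears further down), would be the following sketch. The real file may carry additional keys (log-opts, storage-driver, and so on).

    # Assumed daemon.json content, not taken from the log
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
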
	I0717 12:55:31.326606   41070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 12:55:31.394576   41070 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 12:55:31.632685   41070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 12:55:31.656967   41070 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 12:55:31.708455   41070 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0717 12:55:31.708640   41070 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-476000 dig +short host.docker.internal
	I0717 12:55:31.815889   41070 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 12:55:31.816020   41070 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 12:55:31.821117   41070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 12:55:31.832199   41070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:55:31.943351   41070 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 12:55:31.943470   41070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 12:55:31.963152   41070 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 12:55:31.963167   41070 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 12:55:31.963241   41070 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 12:55:31.971909   41070 ssh_runner.go:195] Run: which lz4
	I0717 12:55:31.975902   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 12:55:31.976039   41070 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 12:55:31.980109   41070 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 12:55:31.980133   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0717 12:55:37.596251   41070 docker.go:600] Took 5.620222 seconds to copy over tarball
	I0717 12:55:37.596351   41070 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 12:55:39.622044   41070 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.025650939s)
	I0717 12:55:39.622060   41070 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 12:55:39.677276   41070 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 12:55:39.686375   41070 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0717 12:55:39.702063   41070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 12:55:39.767545   41070 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 12:55:40.792944   41070 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.025370801s)
	I0717 12:55:40.793046   41070 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 12:55:40.813219   41070 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 12:55:40.813233   41070 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 12:55:40.813241   41070 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 12:55:40.821828   41070 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 12:55:40.821828   41070 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 12:55:40.821854   41070 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 12:55:40.821897   41070 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 12:55:40.821923   41070 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 12:55:40.821926   41070 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 12:55:40.822005   41070 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 12:55:40.822058   41070 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 12:55:40.826811   41070 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 12:55:40.826886   41070 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 12:55:40.827088   41070 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 12:55:40.828312   41070 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 12:55:40.828310   41070 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 12:55:40.828339   41070 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 12:55:40.828533   41070 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 12:55:40.829734   41070 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 12:55:41.944974   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 12:55:41.965897   41070 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 12:55:41.965957   41070 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 12:55:41.966035   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 12:55:41.985594   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 12:55:42.121289   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 12:55:42.141641   41070 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 12:55:42.141669   41070 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 12:55:42.141726   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 12:55:42.161435   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 12:55:42.320606   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 12:55:42.341572   41070 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 12:55:42.341599   41070 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 12:55:42.341650   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 12:55:42.362297   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 12:55:42.362328   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 12:55:42.383480   41070 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 12:55:42.383509   41070 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0717 12:55:42.383574   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0717 12:55:42.404216   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 12:55:42.572058   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 12:55:42.592872   41070 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 12:55:42.592912   41070 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 12:55:42.592979   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 12:55:42.616894   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 12:55:42.877675   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 12:55:42.897189   41070 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 12:55:42.897222   41070 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 12:55:42.897323   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0717 12:55:42.917902   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 12:55:43.141501   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 12:55:43.162064   41070 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 12:55:43.162089   41070 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 12:55:43.162160   41070 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0717 12:55:43.181395   41070 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 12:55:43.960401   41070 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 12:55:43.982010   41070 cache_images.go:92] LoadImages completed in 3.168726471s
	W0717 12:55:43.982066   41070 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
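
The churn above traces back to the "wasn't preloaded" check: the v1.18 preload tarball ships k8s.gcr.io/* tags, while LoadImages looks for registry.k8s.io/* names, so each lookup fails, the (nonexistent) registry.k8s.io tag is removed, and the fallback to on-disk cache files fails because nothing was ever downloaded under that registry. A manual retagging workaround, as a sketch only (it is not what minikube does here):

    # Alias the preloaded k8s.gcr.io images under the registry.k8s.io names LoadImages expects
    for img in kube-apiserver:v1.18.20 kube-controller-manager:v1.18.20 \
               kube-scheduler:v1.18.20 kube-proxy:v1.18.20 \
               pause:3.2 coredns:1.6.7 etcd:3.4.3-0; do
      docker tag "k8s.gcr.io/${img}" "registry.k8s.io/${img}"
    done
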
	I0717 12:55:43.982155   41070 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 12:55:44.034502   41070 cni.go:84] Creating CNI manager for ""
	I0717 12:55:44.034521   41070 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 12:55:44.034537   41070 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 12:55:44.034557   41070 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-476000 NodeName:ingress-addon-legacy-476000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 12:55:44.034669   41070 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-476000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
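
The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are staged on the node as kubeadm.yaml.new and promoted to kubeadm.yaml just before init runs. To inspect what kubeadm will actually consume, a sketch using the pinned binary path from this run:

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    # kubeadm can print its own defaults for side-by-side comparison
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm config print init-defaults
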
	
	I0717 12:55:44.034730   41070 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-476000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 12:55:44.034811   41070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 12:55:44.043840   41070 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 12:55:44.043909   41070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 12:55:44.052481   41070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0717 12:55:44.068397   41070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 12:55:44.084333   41070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
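
With the 10-kubeadm.conf drop-in, the unit file, and the staged kubeadm.yaml.new in place, the kubelet wiring can be sanity-checked before init; a sketch (the service is expected to be inactive until kubeadm starts it):

    systemctl cat kubelet                       # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl status kubelet --no-pager --full  # inactive/dead is normal at this point
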
	I0717 12:55:44.100412   41070 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 12:55:44.104620   41070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 12:55:44.115601   41070 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000 for IP: 192.168.49.2
	I0717 12:55:44.115620   41070 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.115801   41070 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 12:55:44.115864   41070 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 12:55:44.115905   41070 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.key
	I0717 12:55:44.115925   41070 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.crt with IP's: []
	I0717 12:55:44.170483   41070 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.crt ...
	I0717 12:55:44.170491   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.crt: {Name:mke7868190f9f2b7d880f75c70102fb181995686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.170759   41070 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.key ...
	I0717 12:55:44.170767   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/client.key: {Name:mk896a98cbc46a2081667d0128817b4c2748f521 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.170948   41070 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key.dd3b5fb2
	I0717 12:55:44.170961   41070 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 12:55:44.271502   41070 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt.dd3b5fb2 ...
	I0717 12:55:44.271513   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt.dd3b5fb2: {Name:mkd27361a3ed34d5d04b07f716283783b401cb35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.271744   41070 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key.dd3b5fb2 ...
	I0717 12:55:44.271752   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key.dd3b5fb2: {Name:mkb5195d3fb16d9d8ce1e997ec8aa6e598ef9d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.271949   41070 certs.go:337] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt
	I0717 12:55:44.272128   41070 certs.go:341] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key
	I0717 12:55:44.272281   41070 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.key
	I0717 12:55:44.272297   41070 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.crt with IP's: []
	I0717 12:55:44.485183   41070 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.crt ...
	I0717 12:55:44.485193   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.crt: {Name:mkcb3f7917d6b3dda3dff9ad7d986416d2b30270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.485431   41070 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.key ...
	I0717 12:55:44.485439   41070 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.key: {Name:mkfff0a82272319e82be0ac161f6169858993969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:55:44.485629   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 12:55:44.485660   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 12:55:44.485681   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 12:55:44.485702   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 12:55:44.485724   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 12:55:44.485744   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 12:55:44.485766   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 12:55:44.485787   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 12:55:44.485880   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 12:55:44.485924   41070 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 12:55:44.485938   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 12:55:44.485979   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 12:55:44.486017   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 12:55:44.486047   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 12:55:44.486117   41070 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 12:55:44.486149   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 12:55:44.486169   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem -> /usr/share/ca-certificates/38325.pem
	I0717 12:55:44.486201   41070 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> /usr/share/ca-certificates/383252.pem
	I0717 12:55:44.486689   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 12:55:44.508756   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 12:55:44.530572   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 12:55:44.552571   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/ingress-addon-legacy-476000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 12:55:44.574139   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 12:55:44.595467   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 12:55:44.616465   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 12:55:44.637546   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 12:55:44.658566   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 12:55:44.680083   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 12:55:44.701493   41070 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 12:55:44.722753   41070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 12:55:44.739018   41070 ssh_runner.go:195] Run: openssl version
	I0717 12:55:44.745108   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 12:55:44.754694   41070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 12:55:44.759010   41070 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 12:55:44.759057   41070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 12:55:44.765897   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 12:55:44.775345   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 12:55:44.784579   41070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 12:55:44.788939   41070 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 12:55:44.788983   41070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 12:55:44.795787   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 12:55:44.805150   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 12:55:44.814533   41070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 12:55:44.818946   41070 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 12:55:44.819013   41070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 12:55:44.825923   41070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
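
Each of the three cert installs above follows the same pattern: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 so the system trust lookup can find it (383252.pem hashes to 3ec20f2e, hence the 3ec20f2e.0 link). The generic form, as a sketch:

    CERT=/usr/share/ca-certificates/383252.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. 3ec20f2e
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
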
	I0717 12:55:44.835471   41070 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 12:55:44.839799   41070 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 12:55:44.839845   41070 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-476000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-476000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:55:44.839946   41070 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 12:55:44.859121   41070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 12:55:44.868060   41070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 12:55:44.876710   41070 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 12:55:44.876765   41070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 12:55:44.885503   41070 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 12:55:44.885538   41070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 12:55:44.934802   41070 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 12:55:44.934878   41070 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 12:55:45.176891   41070 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 12:55:45.176983   41070 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 12:55:45.177078   41070 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 12:55:45.349574   41070 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 12:55:45.350161   41070 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 12:55:45.350227   41070 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 12:55:45.422618   41070 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 12:55:45.444098   41070 out.go:204]   - Generating certificates and keys ...
	I0717 12:55:45.444186   41070 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 12:55:45.444255   41070 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 12:55:45.657996   41070 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 12:55:46.011985   41070 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 12:55:46.097672   41070 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 12:55:46.246879   41070 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 12:55:46.312428   41070 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 12:55:46.312556   41070 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-476000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 12:55:46.740233   41070 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 12:55:46.740366   41070 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-476000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 12:55:46.944962   41070 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 12:55:47.049499   41070 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 12:55:47.223145   41070 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 12:55:47.223204   41070 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 12:55:47.359024   41070 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 12:55:47.467822   41070 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 12:55:47.681112   41070 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 12:55:47.997105   41070 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 12:55:47.997562   41070 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 12:55:48.018955   41070 out.go:204]   - Booting up control plane ...
	I0717 12:55:48.019065   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 12:55:48.019183   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 12:55:48.019308   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 12:55:48.019380   41070 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 12:55:48.019523   41070 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 12:56:28.007130   41070 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 12:56:28.008305   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:56:28.008560   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:56:33.009095   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:56:33.009368   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:56:43.010456   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:56:43.010644   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:57:03.011424   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:57:03.011598   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:57:43.013983   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:57:43.014246   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:57:43.014258   41070 kubeadm.go:322] 
	I0717 12:57:43.014344   41070 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0717 12:57:43.014440   41070 kubeadm.go:322] 		timed out waiting for the condition
	I0717 12:57:43.014455   41070 kubeadm.go:322] 
	I0717 12:57:43.014509   41070 kubeadm.go:322] 	This error is likely caused by:
	I0717 12:57:43.014592   41070 kubeadm.go:322] 		- The kubelet is not running
	I0717 12:57:43.014746   41070 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 12:57:43.014764   41070 kubeadm.go:322] 
	I0717 12:57:43.014885   41070 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 12:57:43.014944   41070 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0717 12:57:43.014990   41070 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0717 12:57:43.015001   41070 kubeadm.go:322] 
	I0717 12:57:43.015141   41070 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 12:57:43.015242   41070 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 12:57:43.015252   41070 kubeadm.go:322] 
	I0717 12:57:43.015354   41070 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0717 12:57:43.015439   41070 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0717 12:57:43.015519   41070 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0717 12:57:43.015559   41070 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0717 12:57:43.015564   41070 kubeadm.go:322] 
	I0717 12:57:43.017248   41070 kubeadm.go:322] W0717 19:55:44.933872    1675 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 12:57:43.017409   41070 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 12:57:43.017475   41070 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 12:57:43.017587   41070 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0717 12:57:43.017676   41070 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 12:57:43.017790   41070 kubeadm.go:322] W0717 19:55:48.002028    1675 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 12:57:43.017894   41070 kubeadm.go:322] W0717 19:55:48.002744    1675 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 12:57:43.017965   41070 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 12:57:43.018030   41070 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 12:57:43.018115   41070 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-476000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-476000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 19:55:44.933872    1675 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 19:55:48.002028    1675 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 19:55:48.002744    1675 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 12:57:43.018150   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 12:57:43.448974   41070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 12:57:43.459952   41070 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 12:57:43.460012   41070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 12:57:43.468901   41070 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 12:57:43.468925   41070 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 12:57:43.518000   41070 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 12:57:43.518050   41070 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 12:57:43.760327   41070 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 12:57:43.760417   41070 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 12:57:43.760508   41070 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 12:57:43.936265   41070 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 12:57:43.936926   41070 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 12:57:43.936975   41070 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 12:57:44.007809   41070 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 12:57:44.029218   41070 out.go:204]   - Generating certificates and keys ...
	I0717 12:57:44.029292   41070 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 12:57:44.029360   41070 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 12:57:44.029446   41070 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 12:57:44.029495   41070 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 12:57:44.029558   41070 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 12:57:44.029617   41070 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 12:57:44.029679   41070 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 12:57:44.029741   41070 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 12:57:44.029817   41070 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 12:57:44.029894   41070 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 12:57:44.029929   41070 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 12:57:44.029977   41070 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 12:57:44.083286   41070 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 12:57:44.165899   41070 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 12:57:44.341772   41070 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 12:57:44.545955   41070 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 12:57:44.546415   41070 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 12:57:44.567789   41070 out.go:204]   - Booting up control plane ...
	I0717 12:57:44.567882   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 12:57:44.567959   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 12:57:44.568017   41070 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 12:57:44.568082   41070 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 12:57:44.568196   41070 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 12:58:24.557226   41070 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 12:58:24.558014   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:58:24.558192   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:58:29.560029   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:58:29.560236   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:58:39.560574   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:58:39.560747   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:58:59.563103   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:58:59.563332   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:59:39.564479   41070 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 12:59:39.564646   41070 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 12:59:39.564657   41070 kubeadm.go:322] 
	I0717 12:59:39.564703   41070 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0717 12:59:39.564744   41070 kubeadm.go:322] 		timed out waiting for the condition
	I0717 12:59:39.564751   41070 kubeadm.go:322] 
	I0717 12:59:39.564777   41070 kubeadm.go:322] 	This error is likely caused by:
	I0717 12:59:39.564811   41070 kubeadm.go:322] 		- The kubelet is not running
	I0717 12:59:39.564878   41070 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 12:59:39.564882   41070 kubeadm.go:322] 
	I0717 12:59:39.564944   41070 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 12:59:39.564969   41070 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0717 12:59:39.564997   41070 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0717 12:59:39.565000   41070 kubeadm.go:322] 
	I0717 12:59:39.565106   41070 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 12:59:39.565194   41070 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 12:59:39.565206   41070 kubeadm.go:322] 
	I0717 12:59:39.565284   41070 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0717 12:59:39.565326   41070 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0717 12:59:39.565390   41070 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0717 12:59:39.565421   41070 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0717 12:59:39.565428   41070 kubeadm.go:322] 
	I0717 12:59:39.567232   41070 kubeadm.go:322] W0717 19:57:43.516969    4148 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 12:59:39.567384   41070 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 12:59:39.567459   41070 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 12:59:39.567605   41070 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0717 12:59:39.567676   41070 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 12:59:39.567777   41070 kubeadm.go:322] W0717 19:57:44.552173    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 12:59:39.567878   41070 kubeadm.go:322] W0717 19:57:44.553516    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 12:59:39.567947   41070 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 12:59:39.568002   41070 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 12:59:39.568037   41070 kubeadm.go:406] StartCluster complete in 3m54.725745809s
	I0717 12:59:39.568127   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 12:59:39.586630   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.586643   41070 logs.go:286] No container was found matching "kube-apiserver"
	I0717 12:59:39.586720   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 12:59:39.605168   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.605181   41070 logs.go:286] No container was found matching "etcd"
	I0717 12:59:39.605249   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 12:59:39.623416   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.623429   41070 logs.go:286] No container was found matching "coredns"
	I0717 12:59:39.623511   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 12:59:39.642493   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.642507   41070 logs.go:286] No container was found matching "kube-scheduler"
	I0717 12:59:39.642593   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 12:59:39.661009   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.661021   41070 logs.go:286] No container was found matching "kube-proxy"
	I0717 12:59:39.661094   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 12:59:39.679558   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.679573   41070 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 12:59:39.679648   41070 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 12:59:39.698528   41070 logs.go:284] 0 containers: []
	W0717 12:59:39.698544   41070 logs.go:286] No container was found matching "kindnet"
	I0717 12:59:39.698558   41070 logs.go:123] Gathering logs for kubelet ...
	I0717 12:59:39.698572   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 12:59:39.737615   41070 logs.go:123] Gathering logs for dmesg ...
	I0717 12:59:39.737628   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 12:59:39.752707   41070 logs.go:123] Gathering logs for describe nodes ...
	I0717 12:59:39.752721   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 12:59:39.808890   41070 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 12:59:39.808902   41070 logs.go:123] Gathering logs for Docker ...
	I0717 12:59:39.808909   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 12:59:39.824974   41070 logs.go:123] Gathering logs for container status ...
	I0717 12:59:39.824986   41070 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 12:59:39.875913   41070 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 19:57:43.516969    4148 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 19:57:44.552173    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 19:57:44.553516    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 12:59:39.875937   41070 out.go:239] * 
	W0717 12:59:39.875977   41070 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 19:57:43.516969    4148 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 19:57:44.552173    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 19:57:44.553516    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 12:59:39.875993   41070 out.go:239] * 
	W0717 12:59:39.876618   41070 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 12:59:39.939316   41070 out.go:177] 
	W0717 12:59:40.002188   41070 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 19:57:43.516969    4148 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 19:57:44.552173    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 19:57:44.553516    4148 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 12:59:40.002260   41070 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 12:59:40.002301   41070 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 12:59:40.023399   41070 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-476000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (268.53s)
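
The exit reason above is K8S_KUBELET_NOT_RUNNING: on the legacy v1.18.20 node the kubelet never answered its health check on port 10248, and the preflight warnings point at the most likely culprit, the "cgroupfs" vs. "systemd" cgroup-driver mismatch (with Docker 24.0.4 also far newer than the last version validated for kubeadm v1.18). A possible local triage sequence follows, assembled from the commands the log itself suggests plus the standard "minikube ssh" subcommand; the profile name and flags are copied from the failing run, and this is an untested sketch rather than a verified fix:

    # Retry with the kubelet pinned to the systemd cgroup driver, per the
    # suggestion minikube prints above (sketch only, not a verified fix):
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-476000 \
      --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd

    # If the control plane still times out, inspect the kubelet and any crashed
    # control-plane containers from inside the node:
    out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 ssh -- sudo journalctl -xeu kubelet
    out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 ssh -- docker ps -a --filter name=k8s_kube

    # Capture full logs for a bug report, as the advice box suggests:
    out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 logs --file=logs.txt
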

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.75s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 addons enable ingress --alsologtostderr -v=5
E0717 13:00:20.215702   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m37.34255683s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0717 12:59:40.171138   41346 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:59:40.171721   41346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:59:40.171727   41346 out.go:309] Setting ErrFile to fd 2...
	I0717 12:59:40.171731   41346 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:59:40.171914   41346 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 12:59:40.172523   41346 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 12:59:40.172541   41346 addons.go:594] checking whether the cluster is paused
	I0717 12:59:40.172621   41346 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 12:59:40.172642   41346 host.go:66] Checking if "ingress-addon-legacy-476000" exists ...
	I0717 12:59:40.173024   41346 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 12:59:40.223167   41346 ssh_runner.go:195] Run: systemctl --version
	I0717 12:59:40.223270   41346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:59:40.273988   41346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:59:40.364123   41346 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 12:59:40.404384   41346 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0717 12:59:40.425581   41346 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 12:59:40.425610   41346 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-476000"
	I0717 12:59:40.425622   41346 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-476000"
	I0717 12:59:40.425676   41346 host.go:66] Checking if "ingress-addon-legacy-476000" exists ...
	I0717 12:59:40.426272   41346 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 12:59:40.497300   41346 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0717 12:59:40.518266   41346 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0717 12:59:40.539270   41346 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0717 12:59:40.560400   41346 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0717 12:59:40.581706   41346 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 12:59:40.581737   41346 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0717 12:59:40.581878   41346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 12:59:40.630931   41346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 12:59:40.730137   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:40.783104   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:40.783132   41346 retry.go:31] will retry after 149.518939ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:40.934082   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:40.988858   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:40.988878   41346 retry.go:31] will retry after 416.843801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:41.408021   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:41.464563   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:41.464586   41346 retry.go:31] will retry after 674.283621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:42.140001   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:42.195890   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:42.195908   41346 retry.go:31] will retry after 1.255293529s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:43.453494   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:43.510005   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:43.510022   41346 retry.go:31] will retry after 794.222788ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:44.304784   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:44.359858   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:44.359886   41346 retry.go:31] will retry after 1.875025606s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:46.235117   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:46.289216   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:46.289232   41346 retry.go:31] will retry after 3.593639699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:49.885198   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:49.941972   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:49.941989   41346 retry.go:31] will retry after 4.977113412s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:54.921429   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 12:59:54.977436   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 12:59:54.977454   41346 retry.go:31] will retry after 8.009326727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:02.989194   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 13:00:03.045732   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:03.045753   41346 retry.go:31] will retry after 12.637619517s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:15.684623   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 13:00:15.740786   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:15.740810   41346 retry.go:31] will retry after 15.862272553s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:31.605526   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 13:00:31.659770   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:31.659787   41346 retry.go:31] will retry after 13.849190495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:45.509361   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 13:00:45.564383   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:00:45.564408   41346 retry.go:31] will retry after 31.730475038s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:17.295481   41346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 13:01:17.351472   41346 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:17.351501   41346 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-476000"
	I0717 13:01:17.374751   41346 out.go:177] * Verifying ingress addon...
	I0717 13:01:17.396060   41346 out.go:177] 
	W0717 13:01:17.417739   41346 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-476000" does not exist: client config: context "ingress-addon-legacy-476000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-476000" does not exist: client config: context "ingress-addon-legacy-476000" does not exist]
	W0717 13:01:17.417772   41346 out.go:239] * 
	* 
	W0717 13:01:17.426277   41346 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:01:17.447615   41346 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
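Every retry above fails the same way: the apiserver on localhost:8443 refuses connections because the control plane never came up in the preceding StartLegacyK8sCluster test (exit status 109). A quick hedged check, using only commands that appear elsewhere in this report, is to list the apiserver container directly inside the node:

	# Expect either no kube-apiserver container at all, or one that has exited
	minikube ssh -p ingress-addon-legacy-476000 "docker ps -a --filter name=kube-apiserver"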
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-476000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-476000:

-- stdout --
	[
	    {
	        "Id": "2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2",
	        "Created": "2023-07-17T19:55:27.365066758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:55:27.571476732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hosts",
	        "LogPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2-json.log",
	        "Name": "/ingress-addon-legacy-476000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-476000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-476000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-476000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-476000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-476000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a497645202878ee00519598f7cc6ec6f0c9933a7e9db64b2f030ed3d0addfb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55975"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a4976452028",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-476000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2de91322100e",
	                        "ingress-addon-legacy-476000"
	                    ],
	                    "NetworkID": "e5b3dbfe0ad1d7aa8661dcf135e0710cd99c7b2062d6980453367f2d6c71d954",
	                    "EndpointID": "10619dbdb9ee12d018547c3fe75263ff42992fa384ff4e0cd1c926aa7345c3bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000: exit status 6 (355.366962ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:01:17.866925   41392 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-476000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (97.75s)
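The status check exits 6 because the profile is missing from the kubeconfig, not because the container stopped. A short sketch of the fix the warning itself suggests; the profile name is from this run, and the format string is the 8443/tcp variant of the one the log uses for 22/tcp:

	# Rewrite the kubeconfig entry for this profile, as the warning advises
	minikube update-context -p ingress-addon-legacy-476000
	# Confirm the context exists and find the host port mapped to the apiserver
	kubectl config get-contexts
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ingress-addon-legacy-476000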

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (106.47s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 addons enable ingress-dns --alsologtostderr -v=5
E0717 13:02:33.496980   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:02:36.362116   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-476000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m46.053879119s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0717 13:01:17.919696   41402 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:01:17.920373   41402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:01:17.920379   41402 out.go:309] Setting ErrFile to fd 2...
	I0717 13:01:17.920383   41402 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:01:17.920570   41402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:01:17.921135   41402 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 13:01:17.921153   41402 addons.go:594] checking whether the cluster is paused
	I0717 13:01:17.921232   41402 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 13:01:17.921253   41402 host.go:66] Checking if "ingress-addon-legacy-476000" exists ...
	I0717 13:01:17.921658   41402 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 13:01:17.970382   41402 ssh_runner.go:195] Run: systemctl --version
	I0717 13:01:17.970473   41402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 13:01:18.019468   41402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 13:01:18.109305   41402 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:01:18.149472   41402 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0717 13:01:18.170671   41402 config.go:182] Loaded profile config "ingress-addon-legacy-476000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 13:01:18.170701   41402 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-476000"
	I0717 13:01:18.170713   41402 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-476000"
	I0717 13:01:18.170768   41402 host.go:66] Checking if "ingress-addon-legacy-476000" exists ...
	I0717 13:01:18.171371   41402 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-476000 --format={{.State.Status}}
	I0717 13:01:18.243525   41402 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0717 13:01:18.265468   41402 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0717 13:01:18.286650   41402 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 13:01:18.286680   41402 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0717 13:01:18.286832   41402 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-476000
	I0717 13:01:18.336947   41402 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55976 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/ingress-addon-legacy-476000/id_rsa Username:docker}
	I0717 13:01:18.437823   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:18.490410   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:18.490443   41402 retry.go:31] will retry after 371.977596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:18.864265   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:18.920469   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:18.920487   41402 retry.go:31] will retry after 489.350429ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:19.412150   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:19.466477   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:19.466499   41402 retry.go:31] will retry after 323.195945ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:19.791933   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:19.849251   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:19.849269   41402 retry.go:31] will retry after 1.107706364s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:20.959216   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:21.018273   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:21.018292   41402 retry.go:31] will retry after 1.496822701s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:22.516523   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:22.571838   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:22.571857   41402 retry.go:31] will retry after 1.666248959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:24.238867   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:24.293119   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:24.293137   41402 retry.go:31] will retry after 2.713844506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:27.008613   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:27.063914   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:27.063931   41402 retry.go:31] will retry after 5.339462163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:32.403750   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:32.458201   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:32.458223   41402 retry.go:31] will retry after 8.000107636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:40.458806   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:40.517043   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:40.517059   41402 retry.go:31] will retry after 12.433502885s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:52.951280   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:01:53.005053   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:01:53.005081   41402 retry.go:31] will retry after 9.860568269s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:02:02.867998   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:02:02.921764   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:02:02.941974   41402 retry.go:31] will retry after 31.281091467s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:02:34.224496   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:02:34.279490   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:02:34.279515   41402 retry.go:31] will retry after 29.494767061s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:03:03.776889   41402 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 13:03:03.832627   41402 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 13:03:03.854557   41402 out.go:177] 
	W0717 13:03:03.876551   41402 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0717 13:03:03.876600   41402 out.go:239] * 
	* 
	W0717 13:03:03.883655   41402 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:03:03.905454   41402 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
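The retry.go lines above show the apply being re-run with growing delays until the addon gives up. A minimal shell sketch of that backoff pattern (illustrative only, not minikube's actual retry code; the delays and attempt count are assumptions):

	delay=1
	for attempt in 1 2 3 4 5; do
	  kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml && break
	  echo "apply failed, will retry after ${delay}s (attempt ${attempt})"
	  sleep "${delay}"
	  delay=$((delay * 2))
	done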
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-476000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-476000:

-- stdout --
	[
	    {
	        "Id": "2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2",
	        "Created": "2023-07-17T19:55:27.365066758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:55:27.571476732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hosts",
	        "LogPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2-json.log",
	        "Name": "/ingress-addon-legacy-476000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-476000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-476000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-476000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-476000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-476000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a497645202878ee00519598f7cc6ec6f0c9933a7e9db64b2f030ed3d0addfb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55975"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a4976452028",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-476000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2de91322100e",
	                        "ingress-addon-legacy-476000"
	                    ],
	                    "NetworkID": "e5b3dbfe0ad1d7aa8661dcf135e0710cd99c7b2062d6980453367f2d6c71d954",
	                    "EndpointID": "10619dbdb9ee12d018547c3fe75263ff42992fa384ff4e0cd1c926aa7345c3bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000
E0717 13:03:04.059628   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000: exit status 6 (359.986019ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:03:04.335122   41430 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-476000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (106.47s)
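Per the status error, the "ingress-addon-legacy-476000" entry is absent from the kubeconfig even though the container reports Running, which is why status exits 6. The remediation the warning itself names, sketched here for this profile:

	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-476000
	kubectl config current-context

update-context rewrites the profile's kubeconfig entry to point at the running cluster's actual endpoint.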

TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-476000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-476000:

-- stdout --
	[
	    {
	        "Id": "2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2",
	        "Created": "2023-07-17T19:55:27.365066758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477717,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T19:55:27.571476732Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/hosts",
	        "LogPath": "/var/lib/docker/containers/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2/2de91322100e3abb179f07d7022d08ac386dfdc016af9ddb009d063d214213d2-json.log",
	        "Name": "/ingress-addon-legacy-476000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-476000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-476000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da08947f7c79266a29da16592819a6939e15d3320163d8e343bcd28964696b3a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-476000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-476000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-476000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-476000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a497645202878ee00519598f7cc6ec6f0c9933a7e9db64b2f030ed3d0addfb6",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55976"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55977"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55978"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55979"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55975"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a4976452028",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-476000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2de91322100e",
	                        "ingress-addon-legacy-476000"
	                    ],
	                    "NetworkID": "e5b3dbfe0ad1d7aa8661dcf135e0710cd99c7b2062d6980453367f2d6c71d954",
	                    "EndpointID": "10619dbdb9ee12d018547c3fe75263ff42992fa384ff4e0cd1c926aa7345c3bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-476000 -n ingress-addon-legacy-476000: exit status 6 (358.884488ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:03:04.744333   41442 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-476000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-476000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)

TestRunningBinaryUpgrade (64.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker 
E0717 13:22:33.540766   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:22:36.401737   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker : exit status 70 (50.199480119s)

-- stdout --
	! [running-upgrade-414000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2804630255
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:22:28.820409909 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-414000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:22:42.780409776 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-414000", then "minikube start -p running-upgrade-414000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:22:42.780409776 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
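Both provisioning attempts die at the same step: the v1.9.0 provisioner rewrites /lib/systemd/system/docker.service, clearing the inherited ExecStart= before setting its own dockerd command line, and the rewritten unit then refuses to start (note the generated ExecReload line, where the $MAINPID argument appears to have been lost in templating). The clear-then-set pattern itself is ordinary systemd usage; a minimal standalone sketch of it, with a hypothetical drop-in path and trimmed dockerd flags:

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload
	sudo systemctl restart docker

Without the empty ExecStart=, systemd rejects the unit with "Service has more than one ExecStart= setting", exactly as the comment block in the diff above explains.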
version_upgrade_test.go:132: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker : exit status 70 (4.158488659s)

-- stdout --
	* [running-upgrade-414000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1251954057
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-414000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.25191787.exe start -p running-upgrade-414000 --memory=2200 --vm-driver=docker : exit status 70 (4.078923261s)

-- stdout --
	* [running-upgrade-414000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3167263960
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-414000" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:138: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 13:22:55.1109 -0700 PDT m=+2359.710531223
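The legacy binary's own suggestion is the sensible manual recovery here; assuming the same profile name, that is:

	minikube delete -p running-upgrade-414000
	minikube start -p running-upgrade-414000 --alsologtostderr -v=1

The verbose flags give more logging around the provisioning step that the bare exit-70 failure above does not show.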
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-414000
helpers_test.go:235: (dbg) docker inspect running-upgrade-414000:

-- stdout --
	[
	    {
	        "Id": "aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097",
	        "Created": "2023-07-17T20:22:36.867529002Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 612034,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:22:37.155338229Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097/hostname",
	        "HostsPath": "/var/lib/docker/containers/aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097/hosts",
	        "LogPath": "/var/lib/docker/containers/aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097/aaf2ef3ea016b9eb03df758859ec5ab8e5706a78ea89499ce4b9f1fa7db68097-json.log",
	        "Name": "/running-upgrade-414000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-414000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e09c801866cf06c7951c77873cda21d6ff5d18261144f135f13b0e6ab71f88ff-init/diff:/var/lib/docker/overlay2/b092b3b5a7542fef481e5f60a76693b4dba611ccd25e5f4b7e2ad92e85e41bfd/diff:/var/lib/docker/overlay2/edc22f6d72adbe2294de0d8035449760185bc55bde93b2ac5045b1525989ce6f/diff:/var/lib/docker/overlay2/0fd4f6653596f2c165d2881387c8d2820322bc692e1d0e72dcfa878409d0d793/diff:/var/lib/docker/overlay2/be52d2cad56808f531863336531c4c9560737122e8ff4972b0850085bcf6d7d3/diff:/var/lib/docker/overlay2/2f96e757a559e43114212d52aa90b5b5d6f60dd0041ad53c3d54ad5ff0e5e31d/diff:/var/lib/docker/overlay2/5692384f9e4c7573deebe55fff002cca1f52dba8a44609746ee58e4fd07b37d1/diff:/var/lib/docker/overlay2/3329991389a0b381baa38445ad43709e269a37b065240fb1056e54f120662219/diff:/var/lib/docker/overlay2/e49e4276d70ba4816de90a4fbcf888f1eebee1d7cbcb7f86607e75197fbc0b4b/diff:/var/lib/docker/overlay2/4fbf7baebf2866b65f86dfeb4ac76d905e0d918cc57454a2113ebcf81b150abc/diff:/var/lib/docker/overlay2/2666a1
36a8ebce5cb9f8d8c18104273503e26d4150a0ff14295c7dc7e4d62487/diff:/var/lib/docker/overlay2/11947e02eaf4e109c4b6aa1b599e5699c5cff8c5b3694358680af2c2d2f8a63d/diff:/var/lib/docker/overlay2/785d07e5c82f6290d3f36a262e695bc299cbe4918f0a4f3b5758b9e266b7297d/diff:/var/lib/docker/overlay2/fd250aa52b12fc4f37cb44ba3a509c4194798df2d97476391d64c2653f52a87d/diff:/var/lib/docker/overlay2/4144a2900350bef3c5d08f14c9574e43eed5d7fa3e365129e9bfad041f08ad25/diff:/var/lib/docker/overlay2/6e72e826814d1f6895446f01486c733272f29711a4fedf035a56a9769d641069/diff:/var/lib/docker/overlay2/bf1baab05184c9401fc3ca5f76f4ee9d5d369b950163a7bbf0e115502b8bf1fc/diff:/var/lib/docker/overlay2/9ec49f7f622e21fb7909bf83b68516f8b576b5927e994135445f15018b4f5ee8/diff:/var/lib/docker/overlay2/752394829c5b2c0a88dd4a6a6376e359c9586d01e7596527aa3e500bb8445423/diff:/var/lib/docker/overlay2/298a166df8d125594292e4afaeaa6605b7bc7109661bd12f90c890d70fc1ad45/diff:/var/lib/docker/overlay2/5643fb07bc7d31723a33e67e3ee6b942ba4b66fcee0aab4b6625fd26ec67f208/diff:/var/lib/d
ocker/overlay2/e28bb68e5a055c51f3ecd0023e92024e8c9f07a69bc114f8af4fc2aa20e8ff1a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e09c801866cf06c7951c77873cda21d6ff5d18261144f135f13b0e6ab71f88ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e09c801866cf06c7951c77873cda21d6ff5d18261144f135f13b0e6ab71f88ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e09c801866cf06c7951c77873cda21d6ff5d18261144f135f13b0e6ab71f88ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-414000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-414000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-414000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-414000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-414000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02b12647e1fd88cf303ff3cdb3c685ada692e0993ba7ba6b78686b72f028dfc9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57368"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57369"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57370"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/02b12647e1fd",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "1643da8731bca686e907742e10af0ffc0985ea0b9f8077b7d180220355fa9cf2",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3361ceefc20ba5c0ef4fd4641e089b49ab7465471b3fe0fa1db408704e009093",
	                    "EndpointID": "1643da8731bca686e907742e10af0ffc0985ea0b9f8077b7d180220355fa9cf2",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
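
The inspect dump above is how the harness learns where a profile's guest ports landed on the host: NetworkSettings.Ports maps "22/tcp", "2376/tcp" and "8443/tcp" to ephemeral 127.0.0.1 ports (57368-57370 in this run). A minimal Go sketch of that lookup, assuming only the docker CLI on PATH; the hostPort helper and struct names are illustrative, not minikube's API:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// inspectEntry models just the fragment of `docker container inspect`
// output that the port lookup needs.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort returns the host port bound to a container port such as "22/tcp".
func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("port %s not published", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	p, err := hostPort("running-upgrade-414000", "22/tcp")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(p) // 57368 in the inspect output above
}
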
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-414000 -n running-upgrade-414000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-414000 -n running-upgrade-414000: exit status 6 (343.416483ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 13:22:55.495505   47433 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-414000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-414000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
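
The exit-status-6 failure above originates in status.go: the profile name cannot be found in the kubeconfig, so the API endpoint IP cannot be extracted. A minimal sketch of an equivalent check using k8s.io/client-go's clientcmd loader; kubeconfigHasProfile is a hypothetical helper name, not the function minikube actually calls:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// kubeconfigHasProfile reports whether the kubeconfig at path contains a
// cluster entry named after the profile (the condition status.go finds
// violated in the log above).
func kubeconfigHasProfile(path, profile string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Clusters[profile]
	return ok, nil
}

func main() {
	ok, err := kubeconfigHasProfile(
		"/Users/jenkins/minikube-integration/16890-37879/kubeconfig",
		"running-upgrade-414000")
	fmt.Println(ok, err) // false here, which is exactly the failure logged above
}
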
helpers_test.go:175: Cleaning up "running-upgrade-414000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-414000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-414000: (2.193304842s)
--- FAIL: TestRunningBinaryUpgrade (64.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (571.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0717 13:24:32.009687   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.016041   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.026409   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.046902   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.087248   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.169337   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.329793   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:32.650742   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:33.292234   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:34.574496   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:37.135416   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m15.851993753s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-530000 in cluster kubernetes-upgrade-530000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 13:23:51.649124   47799 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:23:51.649294   47799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:23:51.649300   47799 out.go:309] Setting ErrFile to fd 2...
	I0717 13:23:51.649305   47799 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:23:51.649482   47799 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:23:51.650867   47799 out.go:303] Setting JSON to false
	I0717 13:23:51.669927   47799 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15802,"bootTime":1689609629,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 13:23:51.670014   47799 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 13:23:51.690968   47799 out.go:177] * [kubernetes-upgrade-530000] minikube v1.30.1 on Darwin 13.4.1
	I0717 13:23:51.734097   47799 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 13:23:51.734124   47799 notify.go:220] Checking for updates...
	I0717 13:23:51.775892   47799 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:23:51.796825   47799 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 13:23:51.817947   47799 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 13:23:51.839258   47799 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 13:23:51.860001   47799 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 13:23:51.881367   47799 config.go:182] Loaded profile config "cert-expiration-533000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:23:51.881458   47799 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 13:23:51.935831   47799 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 13:23:51.935947   47799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:23:52.034404   47799 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:23:52.022722585 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:23:52.055773   47799 out.go:177] * Using the docker driver based on user configuration
	I0717 13:23:52.076738   47799 start.go:298] selected driver: docker
	I0717 13:23:52.076763   47799 start.go:880] validating driver "docker" against <nil>
	I0717 13:23:52.076782   47799 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 13:23:52.081090   47799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:23:52.180265   47799 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:72 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:23:52.169215518 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:23:52.180441   47799 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 13:23:52.180634   47799 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 13:23:52.202048   47799 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 13:23:52.225023   47799 cni.go:84] Creating CNI manager for ""
	I0717 13:23:52.225060   47799 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:23:52.225077   47799 start_flags.go:319] config:
	{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:23:52.267593   47799 out.go:177] * Starting control plane node kubernetes-upgrade-530000 in cluster kubernetes-upgrade-530000
	I0717 13:23:52.288930   47799 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 13:23:52.309873   47799 out.go:177] * Pulling base image ...
	I0717 13:23:52.368013   47799 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:23:52.368012   47799 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 13:23:52.368129   47799 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 13:23:52.368148   47799 cache.go:57] Caching tarball of preloaded images
	I0717 13:23:52.368401   47799 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 13:23:52.368426   47799 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 13:23:52.369505   47799 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I0717 13:23:52.369716   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/config.json: {Name:mk1b0b5f52ca0b35e40d43804367a5360d2e47da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:23:52.418406   47799 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 13:23:52.418426   47799 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 13:23:52.418442   47799 cache.go:195] Successfully downloaded all kic artifacts
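
The "exists in daemon, skipping load" lines above reflect a simple presence check: if the pinned image reference is already inspectable locally, the pull is skipped. A sketch of that idea, assuming (as this harness does) that shelling out to the docker CLI is acceptable; haveImage is an illustrative name, and passing a tag@digest reference to `docker image inspect` is an assumption about CLI behavior rather than something the log confirms:

package main

import (
	"fmt"
	"os/exec"
)

// haveImage reports whether the image reference is already present in the
// local daemon. `docker image inspect` exits non-zero when it is absent.
func haveImage(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631"
	fmt.Println(haveImage(ref)) // true on this host, so the pull is skipped
}
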
	I0717 13:23:52.418476   47799 start.go:365] acquiring machines lock for kubernetes-upgrade-530000: {Name:mkbe9437d0d825391adf38aab096652c1218f697 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 13:23:52.418634   47799 start.go:369] acquired machines lock for "kubernetes-upgrade-530000" in 146.126µs
	I0717 13:23:52.418662   47799 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 13:23:52.418747   47799 start.go:125] createHost starting for "" (driver="docker")
	I0717 13:23:52.440001   47799 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 13:23:52.440322   47799 start.go:159] libmachine.API.Create for "kubernetes-upgrade-530000" (driver="docker")
	I0717 13:23:52.440356   47799 client.go:168] LocalClient.Create starting
	I0717 13:23:52.440497   47799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem
	I0717 13:23:52.440543   47799 main.go:141] libmachine: Decoding PEM data...
	I0717 13:23:52.440575   47799 main.go:141] libmachine: Parsing certificate...
	I0717 13:23:52.440671   47799 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem
	I0717 13:23:52.440708   47799 main.go:141] libmachine: Decoding PEM data...
	I0717 13:23:52.440720   47799 main.go:141] libmachine: Parsing certificate...
	I0717 13:23:52.477557   47799 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 13:23:52.526811   47799 cli_runner.go:211] docker network inspect kubernetes-upgrade-530000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 13:23:52.526916   47799 network_create.go:281] running [docker network inspect kubernetes-upgrade-530000] to gather additional debugging logs...
	I0717 13:23:52.526936   47799 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-530000
	W0717 13:23:52.575326   47799 cli_runner.go:211] docker network inspect kubernetes-upgrade-530000 returned with exit code 1
	I0717 13:23:52.575356   47799 network_create.go:284] error running [docker network inspect kubernetes-upgrade-530000]: docker network inspect kubernetes-upgrade-530000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-530000 not found
	I0717 13:23:52.575375   47799 network_create.go:286] output of [docker network inspect kubernetes-upgrade-530000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-530000 not found
	
	** /stderr **
	I0717 13:23:52.575483   47799 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 13:23:52.625632   47799 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:23:52.626009   47799 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e5a480}
	I0717 13:23:52.626025   47799 network_create.go:123] attempt to create docker network kubernetes-upgrade-530000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 13:23:52.626093   47799 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000
	W0717 13:23:52.674950   47799 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000 returned with exit code 1
	W0717 13:23:52.674987   47799 network_create.go:148] failed to create docker network kubernetes-upgrade-530000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 13:23:52.675004   47799 network_create.go:115] failed to create docker network kubernetes-upgrade-530000 192.168.58.0/24, will retry: subnet is taken
	I0717 13:23:52.676552   47799 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:23:52.676871   47799 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f26b80}
	I0717 13:23:52.676884   47799 network_create.go:123] attempt to create docker network kubernetes-upgrade-530000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 13:23:52.676957   47799 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000
	W0717 13:23:52.725425   47799 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000 returned with exit code 1
	W0717 13:23:52.725473   47799 network_create.go:148] failed to create docker network kubernetes-upgrade-530000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 13:23:52.725489   47799 network_create.go:115] failed to create docker network kubernetes-upgrade-530000 192.168.67.0/24, will retry: subnet is taken
	I0717 13:23:52.726816   47799 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:23:52.727155   47799 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f27ca0}
	I0717 13:23:52.727170   47799 network_create.go:123] attempt to create docker network kubernetes-upgrade-530000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0717 13:23:52.727233   47799 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 kubernetes-upgrade-530000
	I0717 13:23:52.809333   47799 network_create.go:107] docker network kubernetes-upgrade-530000 192.168.76.0/24 created
	I0717 13:23:52.809370   47799 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-530000" container
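
The two retries above show the free-subnet search: candidate /24s are tried in steps of 9 (192.168.49.0/24, .58, .67, .76, ...) until `docker network create` stops failing with "Pool overlaps with other one on this address space". A condensed sketch of that loop; the step size and bounds are read off the behavior visible here, and createNetwork is an illustrative name rather than minikube's function:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork walks candidate private /24s until one is free, mirroring
// the skip/retry sequence logged above.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 94; third += 9 { // 49, 58, 67, 76, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken by another network; try the next candidate
		}
		return "", fmt.Errorf("network create: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createNetwork("kubernetes-upgrade-530000")
	fmt.Println(subnet, err) // 192.168.76.0/24 succeeded in this run
}
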
	I0717 13:23:52.809480   47799 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 13:23:52.858970   47799 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-530000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 --label created_by.minikube.sigs.k8s.io=true
	I0717 13:23:52.909004   47799 oci.go:103] Successfully created a docker volume kubernetes-upgrade-530000
	I0717 13:23:52.909124   47799 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-530000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 --entrypoint /usr/bin/test -v kubernetes-upgrade-530000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 13:23:53.305090   47799 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-530000
	I0717 13:23:53.305135   47799 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:23:53.305149   47799 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 13:23:53.305262   47799 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-530000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 13:23:56.074749   47799 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-530000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.768856114s)
	I0717 13:23:56.074783   47799 kic.go:199] duration metric: took 2.769043 seconds to extract preloaded images to volume
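
The ~2.8s extraction step above primes the profile's named volume by running the kicbase image's own tar against the lz4-compressed preload. A sketch of the same `docker run` invocation driven from Go, reusing the paths logged in this run purely as example arguments:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload untars the lz4 preload into the profile's named volume by
// running tar inside a throwaway container, as in the log lines above.
func extractPreload(tarball, volume, image string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"kubernetes-upgrade-530000",
		"gcr.io/k8s-minikube/kicbase:v0.0.40")
	fmt.Println(err)
}
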
	I0717 13:23:56.074889   47799 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 13:23:56.171242   47799 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-530000 --name kubernetes-upgrade-530000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-530000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-530000 --network kubernetes-upgrade-530000 --ip 192.168.76.2 --volume kubernetes-upgrade-530000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 13:23:56.430119   47799 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Running}}
	I0717 13:23:56.482143   47799 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:23:56.534426   47799 cli_runner.go:164] Run: docker exec kubernetes-upgrade-530000 stat /var/lib/dpkg/alternatives/iptables
	I0717 13:23:56.633883   47799 oci.go:144] the created container "kubernetes-upgrade-530000" has a running status.
	I0717 13:23:56.633917   47799 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa...
	I0717 13:23:56.851751   47799 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 13:23:56.911610   47799 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:23:57.048441   47799 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 13:23:57.048471   47799 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-530000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 13:23:57.131718   47799 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:23:57.181234   47799 machine.go:88] provisioning docker machine ...
	I0717 13:23:57.181279   47799 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-530000"
	I0717 13:23:57.181384   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:57.230668   47799 main.go:141] libmachine: Using SSH client type: native
	I0717 13:23:57.231052   47799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57479 <nil> <nil>}
	I0717 13:23:57.231068   47799 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-530000 && echo "kubernetes-upgrade-530000" | sudo tee /etc/hostname
	I0717 13:23:57.369198   47799 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-530000
	
	I0717 13:23:57.369296   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:57.419209   47799 main.go:141] libmachine: Using SSH client type: native
	I0717 13:23:57.419562   47799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57479 <nil> <nil>}
	I0717 13:23:57.419577   47799 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-530000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-530000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-530000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 13:23:57.547841   47799 main.go:141] libmachine: SSH cmd err, output: <nil>: 
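
Both provisioning commands above (set the hostname, then patch /etc/hosts) travel over the SSH port Docker published at 127.0.0.1:57479, authenticated with the freshly generated id_rsa. A bare-bones sketch of that transport using golang.org/x/crypto/ssh; error handling is trimmed, and the lax host-key check is only acceptable for a throwaway local container like this one:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, _ := os.ReadFile("/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa")
	signer, _ := ssh.ParsePrivateKey(keyPEM)
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local throwaway container, never for production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:57479", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, _ := client.NewSession()
	defer sess.Close()
	// Same command the provisioner runs above.
	out, err := sess.CombinedOutput(`sudo hostname kubernetes-upgrade-530000 && echo "kubernetes-upgrade-530000" | sudo tee /etc/hostname`)
	fmt.Println(string(out), err)
}
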
	I0717 13:23:57.547862   47799 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 13:23:57.547884   47799 ubuntu.go:177] setting up certificates
	I0717 13:23:57.547894   47799 provision.go:83] configureAuth start
	I0717 13:23:57.547978   47799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530000
	I0717 13:23:57.597446   47799 provision.go:138] copyHostCerts
	I0717 13:23:57.597543   47799 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 13:23:57.597551   47799 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 13:23:57.597660   47799 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 13:23:57.597895   47799 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 13:23:57.597902   47799 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 13:23:57.597978   47799 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 13:23:57.598157   47799 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 13:23:57.598163   47799 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 13:23:57.598221   47799 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 13:23:57.598346   47799 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-530000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-530000]
	I0717 13:23:57.852726   47799 provision.go:172] copyRemoteCerts
	I0717 13:23:57.852800   47799 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 13:23:57.852854   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:57.902928   47799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57479 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:23:57.996848   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 13:23:58.018646   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 13:23:58.039658   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 13:23:58.060689   47799 provision.go:86] duration metric: configureAuth took 512.680183ms
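
Most of configureAuth's 512ms above goes to minting a server certificate whose SANs enumerate the container IP, loopback, and the hostnames from the san=[...] list logged earlier. A compact sketch of that SAN plumbing with crypto/x509; unlike minikube, it self-signs rather than chaining to ca.pem and ca-key.pem, purely to keep the example short:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-530000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		// SANs from the san=[...] list in the provision.go:112 log line.
		IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-530000"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Template used as its own parent => self-signed (minikube would pass the CA here).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
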
	I0717 13:23:58.060710   47799 ubuntu.go:193] setting minikube options for container-runtime
	I0717 13:23:58.060860   47799 config.go:182] Loaded profile config "kubernetes-upgrade-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 13:23:58.060937   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:58.110712   47799 main.go:141] libmachine: Using SSH client type: native
	I0717 13:23:58.111072   47799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57479 <nil> <nil>}
	I0717 13:23:58.111091   47799 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 13:23:58.238685   47799 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 13:23:58.238702   47799 ubuntu.go:71] root file system type: overlay
	I0717 13:23:58.238816   47799 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 13:23:58.238897   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:58.289076   47799 main.go:141] libmachine: Using SSH client type: native
	I0717 13:23:58.289425   47799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57479 <nil> <nil>}
	I0717 13:23:58.289478   47799 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 13:23:58.424801   47799 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 13:23:58.424894   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:58.473870   47799 main.go:141] libmachine: Using SSH client type: native
	I0717 13:23:58.474216   47799 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57479 <nil> <nil>}
	I0717 13:23:58.474229   47799 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 13:23:59.139553   47799 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:23:58.420097913 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
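	
	For reference: the unit file written out above relies on standard systemd override semantics. A service unit may normally declare only one ExecStart= (unless Type=oneshot), so an empty ExecStart= line first clears any previously defined command and the next ExecStart= becomes the only one. A minimal sketch of the same pattern (path and dockerd flags illustrative, not minikube's exact values):
	
	# /etc/systemd/system/docker.service.d/override.conf -- hypothetical drop-in
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	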
	
	I0717 13:23:59.139584   47799 machine.go:91] provisioned docker machine in 1.957995786s
	I0717 13:23:59.139592   47799 client.go:171] LocalClient.Create took 6.697900958s
	I0717 13:23:59.139610   47799 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-530000" took 6.697960517s
	I0717 13:23:59.139621   47799 start.go:300] post-start starting for "kubernetes-upgrade-530000" (driver="docker")
	I0717 13:23:59.139631   47799 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 13:23:59.139716   47799 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 13:23:59.139781   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:59.190367   47799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57479 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:23:59.282973   47799 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 13:23:59.287091   47799 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 13:23:59.287117   47799 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 13:23:59.287125   47799 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 13:23:59.287130   47799 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 13:23:59.287137   47799 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 13:23:59.287218   47799 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 13:23:59.287402   47799 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 13:23:59.287588   47799 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 13:23:59.296076   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:23:59.317148   47799 start.go:303] post-start completed in 177.483404ms
	I0717 13:23:59.317638   47799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530000
	I0717 13:23:59.367235   47799 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/config.json ...
	I0717 13:23:59.367644   47799 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:23:59.367705   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:59.416939   47799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57479 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:23:59.505354   47799 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 13:23:59.510815   47799 start.go:128] duration metric: createHost completed in 7.090663115s
	I0717 13:23:59.510851   47799 start.go:83] releasing machines lock for "kubernetes-upgrade-530000", held for 7.090810484s
	I0717 13:23:59.510938   47799 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-530000
	I0717 13:23:59.560185   47799 ssh_runner.go:195] Run: cat /version.json
	I0717 13:23:59.560197   47799 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 13:23:59.560268   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:59.560275   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:23:59.612885   47799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57479 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:23:59.612893   47799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57479 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	W0717 13:23:59.702697   47799 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 13:23:59.702781   47799 ssh_runner.go:195] Run: systemctl --version
	I0717 13:23:59.826161   47799 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 13:23:59.831938   47799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 13:23:59.854834   47799 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 13:23:59.854910   47799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 13:23:59.870676   47799 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 13:23:59.886501   47799 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 13:23:59.886517   47799 start.go:469] detecting cgroup driver to use...
	I0717 13:23:59.886530   47799 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:23:59.886638   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:23:59.902159   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 13:23:59.912032   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 13:23:59.921802   47799 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 13:23:59.921862   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 13:23:59.931866   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:23:59.941917   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 13:23:59.951719   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:23:59.961298   47799 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 13:23:59.970379   47799 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 13:23:59.980173   47799 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 13:23:59.989026   47799 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 13:23:59.997445   47799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:24:00.065989   47799 ssh_runner.go:195] Run: sudo systemctl restart containerd
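	
	The sed edits above pin the runc v2 shim and turn off systemd cgroup management, matching the "cgroupfs" driver detected on the host. The net effect is roughly this fragment of /etc/containerd/config.toml (a sketch; the exact TOML table paths vary with the containerd version):
	
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	  runtime_type = "io.containerd.runc.v2"
	[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = false
	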
	I0717 13:24:00.134555   47799 start.go:469] detecting cgroup driver to use...
	I0717 13:24:00.134578   47799 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:24:00.134661   47799 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 13:24:00.146246   47799 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 13:24:00.146317   47799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 13:24:00.157857   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:24:00.174609   47799 ssh_runner.go:195] Run: which cri-dockerd
	I0717 13:24:00.179104   47799 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 13:24:00.201531   47799 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 13:24:00.218856   47799 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 13:24:00.314806   47799 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 13:24:00.398546   47799 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 13:24:00.398570   47799 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 13:24:00.415602   47799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:24:00.489289   47799 ssh_runner.go:195] Run: sudo systemctl restart docker
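	
	The 144-byte /etc/docker/daemon.json pushed just above is not echoed into the log; a daemon.json that selects the cgroupfs driver would look roughly like this (contents assumed, not read from the test run):
	
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	
	The docker info --format {{.CgroupDriver}} call later in this log is what verifies which driver actually took effect.
	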
	I0717 13:24:00.721933   47799 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:24:00.748084   47799 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:24:00.817944   47799 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 13:24:00.818081   47799 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-530000 dig +short host.docker.internal
	I0717 13:24:00.934432   47799 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 13:24:00.934553   47799 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 13:24:00.939553   47799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 13:24:00.950704   47799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:24:01.000635   47799 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:24:01.000720   47799 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:24:01.021838   47799 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:24:01.021855   47799 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:24:01.021910   47799 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:24:01.031042   47799 ssh_runner.go:195] Run: which lz4
	I0717 13:24:01.035213   47799 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 13:24:01.039370   47799 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 13:24:01.039394   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 13:24:05.968000   47799 docker.go:600] Took 4.932258 seconds to copy over tarball
	I0717 13:24:05.968091   47799 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 13:24:08.053010   47799 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.084691043s)
	I0717 13:24:08.053028   47799 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 13:24:08.249319   47799 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:24:08.258142   47799 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 13:24:08.274492   47799 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:24:08.344575   47799 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:24:08.808056   47799 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:24:08.830045   47799 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:24:08.830061   47799 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:24:08.830069   47799 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 13:24:08.835845   47799 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:24:08.835849   47799 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 13:24:08.835909   47799 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:24:08.837277   47799 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:24:08.837320   47799 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:24:08.837324   47799 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:24:08.837445   47799 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:24:08.837525   47799 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 13:24:08.841764   47799 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 13:24:08.841864   47799 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:24:08.842230   47799 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:24:08.845749   47799 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:24:08.845749   47799 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:24:08.845796   47799 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:24:08.845789   47799 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 13:24:08.845756   47799 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:24:09.968981   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:24:09.989920   47799 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 13:24:09.989972   47799 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:24:09.990032   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:24:10.009929   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 13:24:10.165978   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 13:24:10.185844   47799 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 13:24:10.185868   47799 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 13:24:10.185926   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 13:24:10.207141   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 13:24:10.337559   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:24:10.358559   47799 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 13:24:10.358583   47799 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:24:10.358643   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:24:10.379180   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 13:24:10.411313   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:24:10.431515   47799 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 13:24:10.431557   47799 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:24:10.431609   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:24:10.450876   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 13:24:10.602285   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 13:24:10.622492   47799 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 13:24:10.622528   47799 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:24:10.622586   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 13:24:10.644672   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 13:24:11.150699   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 13:24:11.171474   47799 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 13:24:11.171507   47799 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 13:24:11.171603   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 13:24:11.190468   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 13:24:11.432547   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:24:11.453567   47799 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:24:11.453769   47799 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 13:24:11.453791   47799 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:24:11.453832   47799 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:24:11.473742   47799 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 13:24:11.473798   47799 cache_images.go:92] LoadImages completed in 2.64350886s
	W0717 13:24:11.473844   47799 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0717 13:24:11.473917   47799 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 13:24:11.523844   47799 cni.go:84] Creating CNI manager for ""
	I0717 13:24:11.523862   47799 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:24:11.523879   47799 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 13:24:11.523897   47799 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-530000 NodeName:kubernetes-upgrade-530000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 13:24:11.523998   47799 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-530000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-530000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
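	
	A generated config like the one above can be sanity-checked before kubeadm is invoked for real; for example (illustrative, assuming the kubeadm binary staged by minikube supports --dry-run):
	
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml --dry-run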
	
	I0717 13:24:11.524069   47799 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-530000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 13:24:11.524149   47799 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 13:24:11.533076   47799 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 13:24:11.533133   47799 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 13:24:11.541899   47799 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0717 13:24:11.557933   47799 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 13:24:11.574257   47799 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0717 13:24:11.590836   47799 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 13:24:11.595160   47799 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 13:24:11.605921   47799 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000 for IP: 192.168.76.2
	I0717 13:24:11.605940   47799 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.606127   47799 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 13:24:11.606189   47799 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 13:24:11.606230   47799 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.key
	I0717 13:24:11.606244   47799 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.crt with IP's: []
	I0717 13:24:11.691329   47799 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.crt ...
	I0717 13:24:11.691340   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.crt: {Name:mk0c9f834979577585b5b076aa7fe37e63fa834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.691646   47799 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.key ...
	I0717 13:24:11.691654   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.key: {Name:mk59e682caa602054202598b2ab94881c4f32a05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.691872   47799 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key.31bdca25
	I0717 13:24:11.691888   47799 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 13:24:11.792796   47799 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt.31bdca25 ...
	I0717 13:24:11.792806   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt.31bdca25: {Name:mk58babd612e826dffefc695a9b1399eda78b231 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.793041   47799 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key.31bdca25 ...
	I0717 13:24:11.793049   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key.31bdca25: {Name:mk5f20dc2f21a4f1d0127c7caf03c7f39d033ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.793243   47799 certs.go:337] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt
	I0717 13:24:11.793411   47799 certs.go:341] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key
	I0717 13:24:11.793601   47799 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.key
	I0717 13:24:11.793617   47799 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.crt with IP's: []
	I0717 13:24:11.914068   47799 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.crt ...
	I0717 13:24:11.914088   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.crt: {Name:mkf42724553cd6fbd98c8beac80c75a256e02c51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:24:11.914428   47799 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.key ...
	I0717 13:24:11.914437   47799 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.key: {Name:mkbc9602bd3452dee00cdf3a9d5f0c6d91b99435 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
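	
	The "generating ... signed cert ... with IP's" steps above issue serving certificates whose subjectAltName carries the listed IPs. An equivalent openssl sketch (file names illustrative; the <(...) process substitution assumes bash):
	
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1') \
	  -out apiserver.crt -days 365
	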
	I0717 13:24:11.914849   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 13:24:11.914901   47799 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 13:24:11.914913   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 13:24:11.914951   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 13:24:11.914988   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 13:24:11.915018   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 13:24:11.915091   47799 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:24:11.915647   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 13:24:11.938122   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 13:24:11.959466   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 13:24:11.980852   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 13:24:12.002528   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 13:24:12.024226   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 13:24:12.045671   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 13:24:12.068110   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 13:24:12.089646   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 13:24:12.111426   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 13:24:12.132802   47799 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 13:24:12.154210   47799 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 13:24:12.169937   47799 ssh_runner.go:195] Run: openssl version
	I0717 13:24:12.176346   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 13:24:12.186342   47799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 13:24:12.190741   47799 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 13:24:12.190790   47799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 13:24:12.197512   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 13:24:12.207347   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 13:24:12.216839   47799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:24:12.221397   47799 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:24:12.221449   47799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:24:12.228384   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 13:24:12.238124   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 13:24:12.247683   47799 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 13:24:12.252087   47799 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 13:24:12.252153   47799 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 13:24:12.259177   47799 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
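	
	The symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention: TLS clients locate a CA under /etc/ssl/certs by hashing the certificate's subject and opening <hash>.0. The hash is the value printed by the first command below, as the log's own b5213941.0 -> minikubeCA.pem link shows:
	
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	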
	I0717 13:24:12.268642   47799 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 13:24:12.273046   47799 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 13:24:12.273094   47799 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-530000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:24:12.273197   47799 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:24:12.292327   47799 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 13:24:12.301721   47799 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:24:12.311069   47799 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:24:12.311133   47799 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:24:12.320107   47799 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:24:12.320135   47799 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:24:12.369720   47799 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:24:12.369779   47799 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:24:12.614843   47799 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:24:12.614934   47799 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:24:12.615012   47799 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:24:12.791640   47799 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:24:12.792558   47799 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:24:12.798961   47799 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:24:12.873398   47799 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:24:12.917611   47799 out.go:204]   - Generating certificates and keys ...
	I0717 13:24:12.917727   47799 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:24:12.917822   47799 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:24:13.043722   47799 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 13:24:13.606239   47799 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 13:24:13.644330   47799 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 13:24:13.954759   47799 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 13:24:14.073097   47799 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 13:24:14.073233   47799 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-530000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 13:24:14.174709   47799 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 13:24:14.174825   47799 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-530000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 13:24:14.373957   47799 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 13:24:14.632604   47799 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 13:24:14.827573   47799 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 13:24:14.827897   47799 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:24:14.904931   47799 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:24:15.072314   47799 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:24:15.169241   47799 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:24:15.352314   47799 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:24:15.353037   47799 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:24:15.374669   47799 out.go:204]   - Booting up control plane ...
	I0717 13:24:15.374802   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:24:15.374918   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:24:15.374993   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:24:15.375087   47799 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:24:15.375260   47799 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:24:55.362426   47799 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:24:55.362594   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:24:55.362777   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:25:00.364574   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:25:00.364789   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:25:10.365319   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:25:10.365478   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:25:30.366420   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:25:30.366601   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:26:10.367512   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:26:10.367790   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:26:10.367828   47799 kubeadm.go:322] 
	I0717 13:26:10.367882   47799 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:26:10.367950   47799 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:26:10.367960   47799 kubeadm.go:322] 
	I0717 13:26:10.367999   47799 kubeadm.go:322] This error is likely caused by:
	I0717 13:26:10.368032   47799 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:26:10.368135   47799 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:26:10.368154   47799 kubeadm.go:322] 
	I0717 13:26:10.368260   47799 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:26:10.368289   47799 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:26:10.368324   47799 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:26:10.368331   47799 kubeadm.go:322] 
	I0717 13:26:10.368418   47799 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:26:10.368534   47799 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:26:10.368618   47799 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:26:10.368657   47799 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:26:10.368722   47799 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:26:10.368769   47799 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:26:10.370319   47799 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:26:10.370390   47799 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:26:10.370533   47799 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:26:10.370613   47799 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:26:10.370701   47799 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:26:10.370770   47799 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 13:26:10.370844   47799 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-530000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-530000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 13:26:10.370883   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:26:10.786612   47799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:26:10.799656   47799 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:26:10.799721   47799 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:26:10.809058   47799 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:26:10.809087   47799 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:26:10.859459   47799 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:26:10.859506   47799 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:26:11.114232   47799 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:26:11.114348   47799 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:26:11.114463   47799 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:26:11.313646   47799 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:26:11.314443   47799 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:26:11.321199   47799 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:26:11.387219   47799 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:26:11.408441   47799 out.go:204]   - Generating certificates and keys ...
	I0717 13:26:11.408496   47799 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:26:11.408632   47799 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:26:11.408728   47799 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:26:11.408821   47799 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:26:11.408885   47799 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:26:11.408929   47799 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:26:11.408977   47799 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:26:11.409033   47799 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:26:11.409176   47799 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:26:11.409244   47799 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:26:11.409282   47799 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:26:11.409342   47799 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:26:11.456151   47799 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:26:11.612564   47799 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:26:11.708675   47799 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:26:11.846840   47799 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:26:11.846945   47799 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:26:11.867290   47799 out.go:204]   - Booting up control plane ...
	I0717 13:26:11.867384   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:26:11.867448   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:26:11.867516   47799 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:26:11.867647   47799 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:26:11.867844   47799 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:26:51.855071   47799 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:26:51.856047   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:26:51.856297   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:26:56.856829   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:26:56.857026   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:27:06.858044   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:27:06.858235   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:27:26.859136   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:27:26.859278   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:28:06.860178   47799 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:28:06.860341   47799 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:28:06.860349   47799 kubeadm.go:322] 
	I0717 13:28:06.860379   47799 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:28:06.860411   47799 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:28:06.860417   47799 kubeadm.go:322] 
	I0717 13:28:06.860438   47799 kubeadm.go:322] This error is likely caused by:
	I0717 13:28:06.860460   47799 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:28:06.860562   47799 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:28:06.860575   47799 kubeadm.go:322] 
	I0717 13:28:06.860704   47799 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:28:06.860742   47799 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:28:06.860795   47799 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:28:06.860806   47799 kubeadm.go:322] 
	I0717 13:28:06.860895   47799 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:28:06.860980   47799 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:28:06.861059   47799 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:28:06.861105   47799 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:28:06.861161   47799 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:28:06.861193   47799 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:28:06.863561   47799 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:28:06.863633   47799 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:28:06.863735   47799 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:28:06.863851   47799 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:28:06.863919   47799 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:28:06.863996   47799 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 13:28:06.864012   47799 kubeadm.go:406] StartCluster complete in 3m54.589277516s
	I0717 13:28:06.864103   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:28:06.884624   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.884637   47799 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:28:06.884711   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:28:06.904847   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.904860   47799 logs.go:286] No container was found matching "etcd"
	I0717 13:28:06.904918   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:28:06.925138   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.925153   47799 logs.go:286] No container was found matching "coredns"
	I0717 13:28:06.925223   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:28:06.948207   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.948223   47799 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:28:06.948317   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:28:06.969614   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.969631   47799 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:28:06.969702   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:28:06.989348   47799 logs.go:284] 0 containers: []
	W0717 13:28:06.989362   47799 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:28:06.989429   47799 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:28:07.009867   47799 logs.go:284] 0 containers: []
	W0717 13:28:07.009880   47799 logs.go:286] No container was found matching "kindnet"
	I0717 13:28:07.009892   47799 logs.go:123] Gathering logs for kubelet ...
	I0717 13:28:07.009898   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:28:07.052513   47799 logs.go:123] Gathering logs for dmesg ...
	I0717 13:28:07.052533   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:28:07.068938   47799 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:28:07.068953   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:28:07.127686   47799 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:28:07.127700   47799 logs.go:123] Gathering logs for Docker ...
	I0717 13:28:07.127708   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:28:07.149896   47799 logs.go:123] Gathering logs for container status ...
	I0717 13:28:07.149914   47799 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 13:28:07.211332   47799 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 13:28:07.211361   47799 out.go:239] * 
	W0717 13:28:07.211444   47799 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:28:07.211483   47799 out.go:239] * 
	W0717 13:28:07.212357   47799 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:28:07.298094   47799 out.go:177] 
	W0717 13:28:07.356246   47799 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:28:07.356288   47799 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 13:28:07.356308   47799 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 13:28:07.398151   47799 out.go:177] 

                                                
                                                
** /stderr **
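The dump above repeats the same signal five times: the kubelet never answered on localhost:10248, so kubeadm timed out in wait-control-plane. The troubleshooting commands kubeadm suggests can be run from the host against the minikube node. A minimal sketch, assuming the profile name from this run and that `minikube ssh` accepts a command argument (the ssh invocation form is an assumption; the inner commands are taken verbatim from kubeadm's own advice in the log):

	# check whether the kubelet service is running inside the node
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-530000 "sudo systemctl status kubelet"
	# read the kubelet journal for the actual startup error
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-530000 "sudo journalctl -xeu kubelet"
	# look for control-plane containers that crashed on launch
	out/minikube-darwin-amd64 ssh -p kubernetes-upgrade-530000 "docker ps -a | grep kube | grep -v pause"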
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
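minikube's closing Suggestion points at the cgroup-driver mismatch that the IsDockerSystemdCheck warning also flagged (Docker on cgroupfs, kubelet expecting systemd). A sketch of the suggested retry, reusing the flags from the failed invocation; the --extra-config value is copied from the Suggestion line and is untested against this run:

	out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd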
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-530000
version_upgrade_test.go:239: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-530000: (1.635215135s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-530000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-530000 status --format={{.Host}}: exit status 7 (91.822198ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
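The --format flag here is a Go template applied to minikube's status struct, so only the Host field ("Stopped") is printed. minikube documents the status exit code as a bitmask (1 for the host, 2 for the cluster, 4 for Kubernetes), so, if that reading applies here, 7 right after a stop means everything is down, which is why the harness notes it "may be ok":

	out/minikube-darwin-amd64 -p kubernetes-upgrade-530000 status --format={{.Host}}
	# expected while stopped: prints "Stopped", exits 7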
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:255: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker : (4m37.322064323s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-530000 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (359.781162ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-530000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-530000
	    minikube start -p kubernetes-upgrade-530000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5300002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-530000 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
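The downgrade guard behaves as designed here: exit status 106 is the K8S_DOWNGRADE_UNSUPPORTED path, and the test expects the refusal. To actually get a v1.16.0 cluster, option 1 from the Suggestion is the relevant one; a sketch combining it with the flags this test passes elsewhere (--memory and --driver are carried over from the test invocation, not from the Suggestion text):

	out/minikube-darwin-amd64 delete -p kubernetes-upgrade-530000
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=docker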
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:287: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-530000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker : (30.060815439s)
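Aside: the failed downgrade above is deliberate behavior: minikube refuses to move an existing v1.27.3 cluster back to v1.16.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) rather than risk the data. Below is a minimal sketch of that kind of version guard, using golang.org/x/mod/semver for the comparison; it is an illustration under those assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	// Versions taken from the log above; both carry the "v" prefix that
	// golang.org/x/mod/semver requires.
	existing, requested := "v1.27.3", "v1.16.0"
	if semver.Compare(requested, existing) < 0 {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s\n",
			existing, requested)
		os.Exit(106) // exit status observed in the log above
	}
	fmt.Println("upgrade or same-version restart: proceeding")
}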
version_upgrade_test.go:291: *** TestKubernetesUpgrade FAILED at 2023-07-17 13:33:17.036449 -0700 PDT m=+2981.598134588
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-530000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-530000:
-- stdout --
	[
	    {
	        "Id": "90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd",
	        "Created": "2023-07-17T20:23:56.217650498Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641695,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:28:10.560781824Z",
	            "FinishedAt": "2023-07-17T20:28:08.015977536Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd/hostname",
	        "HostsPath": "/var/lib/docker/containers/90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd/hosts",
	        "LogPath": "/var/lib/docker/containers/90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd/90e8d2380e74461329d68298e40921de43c367dd49003ce2e37fb02ed4603bdd-json.log",
	        "Name": "/kubernetes-upgrade-530000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-530000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-530000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e03c8deb96b2cf59f018d528706b43e1724c871716a2f353a87089b1317deedf-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e03c8deb96b2cf59f018d528706b43e1724c871716a2f353a87089b1317deedf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e03c8deb96b2cf59f018d528706b43e1724c871716a2f353a87089b1317deedf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e03c8deb96b2cf59f018d528706b43e1724c871716a2f353a87089b1317deedf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-530000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-530000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-530000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-530000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-530000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "005957d44a6bf203d9a8809b4779b15137e0633532a6522e5f249a0cd1c566e8",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57742"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57743"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57744"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57745"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57741"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/005957d44a6b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-530000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "90e8d2380e74",
	                        "kubernetes-upgrade-530000"
	                    ],
	                    "NetworkID": "fb25cdd4919f5c6237850c220c180c7b3c901e148585c106aca707198ac5d0d1",
	                    "EndpointID": "5bbd52e33946a33f8cc883b0ff10ae4df6cdd9b1893c32dcd289153a87d493f5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
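Aside: the post-mortem dumps the entire docker inspect document, but a single field can be pulled with an inspect format template. The sketch below extracts the published host port for 8443/tcp (57741 in this run), the same endpoint the healthz checks later in this log poll; the profile name comes from the log, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Equivalent to: docker inspect -f '<template>' kubernetes-upgrade-530000
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl,
		"kubernetes-upgrade-530000").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 57741 in this run
}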
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-530000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-530000 logs -n 25: (2.368110768s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo docker                         | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:32 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:32 PDT | 17 Jul 23 13:33 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo cat                            | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo                                | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo find                           | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-859000 sudo crio                           | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-859000                                     | calico-859000         | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT | 17 Jul 23 13:33 PDT |
	| start   | -p custom-flannel-859000                             | custom-flannel-859000 | jenkins | v1.30.1 | 17 Jul 23 13:33 PDT |                     |
	|         | --memory=3072 --alsologtostderr                      |                       |         |         |                     |                     |
	|         | --wait=true --wait-timeout=15m                       |                       |         |         |                     |                     |
	|         | --cni=testdata/kube-flannel.yaml                     |                       |         |         |                     |                     |
	|         | --driver=docker                                      |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 13:33:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 13:33:07.235767   51042 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:33:07.235954   51042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:33:07.235959   51042 out.go:309] Setting ErrFile to fd 2...
	I0717 13:33:07.235963   51042 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:33:07.236140   51042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:33:07.237593   51042 out.go:303] Setting JSON to false
	I0717 13:33:07.257283   51042 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":16358,"bootTime":1689609629,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 13:33:07.257365   51042 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 13:33:07.279571   51042 out.go:177] * [custom-flannel-859000] minikube v1.30.1 on Darwin 13.4.1
	I0717 13:33:07.321998   51042 notify.go:220] Checking for updates...
	I0717 13:33:07.343165   51042 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 13:33:07.371826   51042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:33:07.431211   51042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 13:33:07.473228   51042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 13:33:07.516318   51042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 13:33:07.575348   51042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 13:33:07.597963   51042 config.go:182] Loaded profile config "kubernetes-upgrade-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:33:07.598113   51042 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 13:33:07.663588   51042 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 13:33:07.663737   51042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:33:07.782670   51042 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:33:07.770480089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:33:07.825203   51042 out.go:177] * Using the docker driver based on user configuration
	I0717 13:33:07.847418   51042 start.go:298] selected driver: docker
	I0717 13:33:07.847455   51042 start.go:880] validating driver "docker" against <nil>
	I0717 13:33:07.847475   51042 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 13:33:07.851569   51042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:33:07.961327   51042 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:33:07.950298568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:33:07.961506   51042 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 13:33:07.961706   51042 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 13:33:07.988202   51042 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 13:33:08.010326   51042 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0717 13:33:08.010419   51042 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0717 13:33:08.010446   51042 start_flags.go:319] config:
	{Name:custom-flannel-859000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-859000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:33:08.063989   51042 out.go:177] * Starting control plane node custom-flannel-859000 in cluster custom-flannel-859000
	I0717 13:33:08.085384   51042 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 13:33:08.107263   51042 out.go:177] * Pulling base image ...
	I0717 13:33:08.150403   51042 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 13:33:08.150435   51042 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 13:33:08.150558   51042 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 13:33:08.150582   51042 cache.go:57] Caching tarball of preloaded images
	I0717 13:33:08.151303   51042 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 13:33:08.151462   51042 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 13:33:08.151919   51042 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/config.json ...
	I0717 13:33:08.151997   51042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/config.json: {Name:mk607eb3fad6f2a77e112b3803c63c92dadb13d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:33:08.200952   51042 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 13:33:08.200982   51042 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 13:33:08.201010   51042 cache.go:195] Successfully downloaded all kic artifacts
	I0717 13:33:08.201064   51042 start.go:365] acquiring machines lock for custom-flannel-859000: {Name:mkd1ff76b21155a61d1fefde69af875c8384fc3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 13:33:08.201224   51042 start.go:369] acquired machines lock for "custom-flannel-859000" in 145.746µs
	I0717 13:33:08.201252   51042 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-859000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-859000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 13:33:08.201330   51042 start.go:125] createHost starting for "" (driver="docker")
	I0717 13:33:07.281027   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:07.287679   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 200:
	ok
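Aside: the healthz probes in this log alternate between 200 ("ok") and 500 responses whose bodies list failed [-] poststarthook checks while the freshly restarted apiserver warms up. Below is a sketch of such a readiness poll against the published port from this run; the timeout and retry policy are assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver presents minikube's own CA; skip verification for the probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://127.0.0.1:57741/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body) // 500 bodies enumerate the failing checks
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}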
	I0717 13:33:07.300092   50586 system_pods.go:86] 5 kube-system pods found
	I0717 13:33:07.300114   50586 system_pods.go:89] "etcd-kubernetes-upgrade-530000" [35cdb698-380d-46d7-9612-b342057876f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 13:33:07.300121   50586 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-530000" [d468007a-4820-4709-98f2-6c8ac22befec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 13:33:07.300134   50586 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-530000" [fa50995d-5bbf-4ed6-ae6c-5559280722c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 13:33:07.300143   50586 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-530000" [906cf7c0-0e02-4aed-8f95-a3cb9f653d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 13:33:07.300148   50586 system_pods.go:89] "storage-provisioner" [c282cf7f-5411-4eac-9ab9-c32b87c4d532] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 13:33:07.300157   50586 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0717 13:33:07.300164   50586 kubeadm.go:1128] stopping kube-system containers ...
	I0717 13:33:07.300244   50586 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:33:07.321056   50586 docker.go:462] Stopping containers: [d173c60457f4 625d8d2d7e29 6ab2b1b93564 9a70b770b5ce 0932c307bc81 3786d4deeb00 e589e1913955 0cf4f7ac4c40 83eb1e7abc45 e322b7ca4713 cc873d1fc830 f3c5676a2e8d 482a3469fe47 ab055b9f6063 1535fc8a85f3 cc561b35d108]
	I0717 13:33:07.321147   50586 ssh_runner.go:195] Run: docker stop d173c60457f4 625d8d2d7e29 6ab2b1b93564 9a70b770b5ce 0932c307bc81 3786d4deeb00 e589e1913955 0cf4f7ac4c40 83eb1e7abc45 e322b7ca4713 cc873d1fc830 f3c5676a2e8d 482a3469fe47 ab055b9f6063 1535fc8a85f3 cc561b35d108
	I0717 13:33:08.634350   50586 ssh_runner.go:235] Completed: docker stop d173c60457f4 625d8d2d7e29 6ab2b1b93564 9a70b770b5ce 0932c307bc81 3786d4deeb00 e589e1913955 0cf4f7ac4c40 83eb1e7abc45 e322b7ca4713 cc873d1fc830 f3c5676a2e8d 482a3469fe47 ab055b9f6063 1535fc8a85f3 cc561b35d108: (1.313173907s)
	I0717 13:33:08.634465   50586 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 13:33:08.723057   50586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:33:08.737475   50586 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 20:32 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 20:32 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jul 17 20:32 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 17 20:32 /etc/kubernetes/scheduler.conf
	
	I0717 13:33:08.737572   50586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 13:33:08.803927   50586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 13:33:08.820367   50586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 13:33:08.837717   50586 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:33:08.837810   50586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 13:33:08.852148   50586 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 13:33:08.901159   50586 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:33:08.901221   50586 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
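Aside: the grep/rm sequence above is a reconfigure step: each component kubeconfig must point at https://control-plane.minikube.internal:8443, and any file that does not is deleted so the later "kubeadm init phase kubeconfig" run regenerates it. A sketch of that check under those assumptions follows (paths and endpoint come from the log; the helper itself is hypothetical).

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureEndpoint removes a component kubeconfig that does not reference the
// expected control-plane endpoint, mirroring the grep-then-rm seen above.
func ensureEndpoint(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if !strings.Contains(string(data), endpoint) {
		// Stale config pointing elsewhere: remove so kubeadm regenerates it.
		return os.Remove(path)
	}
	return nil
}

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureEndpoint(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, f, err)
		}
	}
}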
	I0717 13:33:08.911271   50586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:33:08.921409   50586 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 13:33:08.921424   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:08.974608   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:09.821151   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:09.968813   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:10.025339   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:10.111885   50586 api_server.go:52] waiting for apiserver process to appear ...
	I0717 13:33:10.111964   50586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:33:10.625298   50586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:33:11.124864   50586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:33:11.624864   50586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:33:11.638699   50586 api_server.go:72] duration metric: took 1.526803516s to wait for apiserver process to appear ...
	I0717 13:33:11.638717   50586 api_server.go:88] waiting for apiserver healthz status ...
	I0717 13:33:11.638731   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:08.243255   51042 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0717 13:33:08.243688   51042 start.go:159] libmachine.API.Create for "custom-flannel-859000" (driver="docker")
	I0717 13:33:08.243764   51042 client.go:168] LocalClient.Create starting
	I0717 13:33:08.243946   51042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem
	I0717 13:33:08.244014   51042 main.go:141] libmachine: Decoding PEM data...
	I0717 13:33:08.244050   51042 main.go:141] libmachine: Parsing certificate...
	I0717 13:33:08.244123   51042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem
	I0717 13:33:08.244172   51042 main.go:141] libmachine: Decoding PEM data...
	I0717 13:33:08.244189   51042 main.go:141] libmachine: Parsing certificate...
	I0717 13:33:08.245091   51042 cli_runner.go:164] Run: docker network inspect custom-flannel-859000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 13:33:08.294443   51042 cli_runner.go:211] docker network inspect custom-flannel-859000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 13:33:08.294546   51042 network_create.go:281] running [docker network inspect custom-flannel-859000] to gather additional debugging logs...
	I0717 13:33:08.294562   51042 cli_runner.go:164] Run: docker network inspect custom-flannel-859000
	W0717 13:33:08.343871   51042 cli_runner.go:211] docker network inspect custom-flannel-859000 returned with exit code 1
	I0717 13:33:08.343896   51042 network_create.go:284] error running [docker network inspect custom-flannel-859000]: docker network inspect custom-flannel-859000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-859000 not found
	I0717 13:33:08.343913   51042 network_create.go:286] output of [docker network inspect custom-flannel-859000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-859000 not found
	
	** /stderr **
	I0717 13:33:08.343994   51042 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 13:33:08.394096   51042 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:33:08.394464   51042 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00131f630}
	I0717 13:33:08.394481   51042 network_create.go:123] attempt to create docker network custom-flannel-859000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 13:33:08.394561   51042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-859000 custom-flannel-859000
	W0717 13:33:08.442487   51042 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-859000 custom-flannel-859000 returned with exit code 1
	W0717 13:33:08.442528   51042 network_create.go:148] failed to create docker network custom-flannel-859000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-859000 custom-flannel-859000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 13:33:08.442553   51042 network_create.go:115] failed to create docker network custom-flannel-859000 192.168.58.0/24, will retry: subnet is taken
	I0717 13:33:08.443948   51042 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:33:08.444283   51042 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f9c4d0}
	I0717 13:33:08.444295   51042 network_create.go:123] attempt to create docker network custom-flannel-859000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 13:33:08.444360   51042 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-859000 custom-flannel-859000
	I0717 13:33:08.527269   51042 network_create.go:107] docker network custom-flannel-859000 192.168.67.0/24 created
	I0717 13:33:08.527304   51042 kic.go:117] calculated static IP "192.168.67.2" for the "custom-flannel-859000" container
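Aside: the two network_create attempts above show the subnet-selection loop: minikube skips subnets it already knows are reserved, tries to create the bridge network, and advances to the next private /24 when Docker answers "Pool overlaps with other one on this address space" (192.168.58.0/24 was taken here, 192.168.67.0/24 succeeded). Below is a sketch of that retry loop; the candidate list mirrors this run and the loop itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "custom-flannel-859000"
	for _, subnet := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		gateway := subnet[:len(subnet)-4] + "1" // x.y.z.0/24 -> x.y.z.1 (assumes /24 candidates)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
		if err == nil {
			fmt.Println("created", name, "on", subnet)
			return
		}
		fmt.Printf("subnet %s taken, retrying: %s", subnet, out)
	}
	fmt.Println("no free subnet found")
}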
	I0717 13:33:08.527439   51042 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 13:33:08.583324   51042 cli_runner.go:164] Run: docker volume create custom-flannel-859000 --label name.minikube.sigs.k8s.io=custom-flannel-859000 --label created_by.minikube.sigs.k8s.io=true
	I0717 13:33:08.644962   51042 oci.go:103] Successfully created a docker volume custom-flannel-859000
	I0717 13:33:08.645106   51042 cli_runner.go:164] Run: docker run --rm --name custom-flannel-859000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-859000 --entrypoint /usr/bin/test -v custom-flannel-859000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 13:33:09.075602   51042 oci.go:107] Successfully prepared a docker volume custom-flannel-859000
	I0717 13:33:09.075630   51042 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 13:33:09.075642   51042 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 13:33:09.075760   51042 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-859000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 13:33:11.962781   51042 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-859000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.88689265s)
	I0717 13:33:11.962818   51042 kic.go:199] duration metric: took 2.887165 seconds to extract preloaded images to volume
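
The extraction step just logged is a neat trick: a throwaway container whose entrypoint is tar, with the preload tarball mounted read-only and the named volume mounted as the extraction target. A self-contained approximation of that exact invocation (the host tarball path is a placeholder, and the kicbase tag is abbreviated from the digest-pinned reference in the log):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	// Mirror the logged command: tar runs inside the container, reading
    	// the lz4-compressed preload and unpacking it into the volume.
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
    		"-v", "custom-flannel-859000:/extractDir",                   // named volume from the log
    		"gcr.io/k8s-minikube/kicbase:v0.0.40",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }
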
	I0717 13:33:11.962971   51042 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 13:33:12.087644   51042 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-859000 --name custom-flannel-859000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-859000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-859000 --network custom-flannel-859000 --ip 192.168.67.2 --volume custom-flannel-859000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 13:33:13.629126   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 13:33:13.629148   50586 api_server.go:103] status: https://127.0.0.1:57741/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 13:33:14.129359   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:14.135241   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 13:33:14.135256   50586 api_server.go:103] status: https://127.0.0.1:57741/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 13:33:14.629225   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:14.634698   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 13:33:14.634717   50586 api_server.go:103] status: https://127.0.0.1:57741/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 13:33:15.130586   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:15.135842   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 200:
	ok
	I0717 13:33:15.143135   50586 api_server.go:141] control plane version: v1.27.3
	I0717 13:33:15.143148   50586 api_server.go:131] duration metric: took 3.504414891s to wait for apiserver health ...
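
The repeated 500 responses above are the kube-apiserver's poststarthooks flipping from [-] to [+] one by one until /healthz finally returns 200, roughly 3.5s in. A hedged sketch of the ~500ms polling loop that api_server.go evidently runs (timeout values are assumptions, and TLS verification is skipped here only to keep the sketch self-contained; minikube authenticates against the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   2 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK { // hooks done: body is just "ok"
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the half-second cadence in the log
    	}
    	return fmt.Errorf("apiserver never became healthy within %s", timeout)
    }

    func main() {
    	if err := waitHealthy("https://127.0.0.1:57741/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
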
	I0717 13:33:15.143154   50586 cni.go:84] Creating CNI manager for ""
	I0717 13:33:15.143162   50586 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 13:33:15.167706   50586 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 13:33:15.189412   50586 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 13:33:15.194622   50586 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 13:33:15.194633   50586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 13:33:15.211416   50586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
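
Because the docker driver is paired with the docker runtime, minikube picks kindnet and applies it in two steps: stage the manifest on the node ("scp memory --> /var/tmp/minikube/cni.yaml"), then apply it with the version-pinned kubectl. A sketch of that sequence (paths and the kubectl invocation are taken from the log; the manifest content is a placeholder, and minikube runs both steps over SSH inside the node rather than locally):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	if err := os.MkdirAll("/var/tmp/minikube", 0755); err != nil {
    		log.Fatal(err)
    	}
    	manifest := []byte("# kindnet DaemonSet manifest would go here\n") // placeholder content
    	if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0644); err != nil {
    		log.Fatal(err)
    	}
    	// Apply with the version-pinned kubectl, as logged above.
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.27.3/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	if err != nil {
    		log.Fatalf("apply failed: %v\n%s", err, out)
    	}
    }
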
	I0717 13:33:15.913010   50586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 13:33:15.917883   50586 system_pods.go:59] 5 kube-system pods found
	I0717 13:33:15.917900   50586 system_pods.go:61] "etcd-kubernetes-upgrade-530000" [35cdb698-380d-46d7-9612-b342057876f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 13:33:15.917910   50586 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-530000" [d468007a-4820-4709-98f2-6c8ac22befec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 13:33:15.917916   50586 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-530000" [fa50995d-5bbf-4ed6-ae6c-5559280722c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 13:33:15.917922   50586 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-530000" [906cf7c0-0e02-4aed-8f95-a3cb9f653d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 13:33:15.917933   50586 system_pods.go:61] "storage-provisioner" [c282cf7f-5411-4eac-9ab9-c32b87c4d532] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 13:33:15.917938   50586 system_pods.go:74] duration metric: took 4.919467ms to wait for pod list to return data ...
	I0717 13:33:15.917958   50586 node_conditions.go:102] verifying NodePressure condition ...
	I0717 13:33:15.922101   50586 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0717 13:33:15.922117   50586 node_conditions.go:123] node cpu capacity is 6
	I0717 13:33:15.922130   50586 node_conditions.go:105] duration metric: took 4.165796ms to run NodePressure ...
	I0717 13:33:15.922147   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:33:16.066861   50586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 13:33:16.075956   50586 ops.go:34] apiserver oom_adj: -16
	I0717 13:33:16.075969   50586 kubeadm.go:640] restartCluster took 12.775191788s
	I0717 13:33:16.075975   50586 kubeadm.go:406] StartCluster complete in 12.875954447s
	I0717 13:33:16.075991   50586 settings.go:142] acquiring lock: {Name:mk20aac2aa27f8048925e201531865bdb5a37907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:33:16.076108   50586 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:33:16.076600   50586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:33:16.076881   50586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 13:33:16.076927   50586 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 13:33:16.077043   50586 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-530000"
	I0717 13:33:16.077056   50586 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-530000"
	I0717 13:33:16.077064   50586 addons.go:231] Setting addon storage-provisioner=true in "kubernetes-upgrade-530000"
	W0717 13:33:16.077072   50586 addons.go:240] addon storage-provisioner should already be in state true
	I0717 13:33:16.077083   50586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-530000"
	I0717 13:33:16.077098   50586 config.go:182] Loaded profile config "kubernetes-upgrade-530000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:33:16.077122   50586 host.go:66] Checking if "kubernetes-upgrade-530000" exists ...
	I0717 13:33:16.077359   50586 kapi.go:59] client config for kubernetes-upgrade-530000: &rest.Config{Host:"https://127.0.0.1:57741", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.key", CAFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 13:33:16.077459   50586 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:33:16.078454   50586 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:33:16.083376   50586 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-530000" context rescaled to 1 replicas
	I0717 13:33:16.083420   50586 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 13:33:16.127668   50586 out.go:177] * Verifying Kubernetes components...
	I0717 13:33:16.148968   50586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:33:16.161142   50586 kapi.go:59] client config for kubernetes-upgrade-530000: &rest.Config{Host:"https://127.0.0.1:57741", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubernetes-upgrade-530000/client.key", CAFile:"/Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 13:33:16.183740   50586 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 13:33:16.183798   50586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:33:16.196886   50586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:33:16.205512   50586 addons.go:231] Setting addon default-storageclass=true in "kubernetes-upgrade-530000"
	W0717 13:33:16.219847   50586 addons.go:240] addon default-storageclass should already be in state true
	I0717 13:33:16.219902   50586 host.go:66] Checking if "kubernetes-upgrade-530000" exists ...
	I0717 13:33:16.219917   50586 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 13:33:16.219930   50586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 13:33:16.220028   50586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:33:16.221247   50586 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-530000 --format={{.State.Status}}
	I0717 13:33:16.264343   50586 api_server.go:52] waiting for apiserver process to appear ...
	I0717 13:33:16.264473   50586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:33:16.281300   50586 api_server.go:72] duration metric: took 197.845973ms to wait for apiserver process to appear ...
	I0717 13:33:16.281323   50586 api_server.go:88] waiting for apiserver healthz status ...
	I0717 13:33:16.281354   50586 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57741/healthz ...
	I0717 13:33:16.289414   50586 api_server.go:279] https://127.0.0.1:57741/healthz returned 200:
	ok
	I0717 13:33:16.292392   50586 api_server.go:141] control plane version: v1.27.3
	I0717 13:33:16.292409   50586 api_server.go:131] duration metric: took 11.08077ms to wait for apiserver health ...
	I0717 13:33:16.292416   50586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 13:33:16.293688   50586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57742 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:33:16.296799   50586 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 13:33:16.296813   50586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 13:33:16.296926   50586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-530000
	I0717 13:33:16.302755   50586 system_pods.go:59] 5 kube-system pods found
	I0717 13:33:16.302791   50586 system_pods.go:61] "etcd-kubernetes-upgrade-530000" [35cdb698-380d-46d7-9612-b342057876f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 13:33:16.302807   50586 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-530000" [d468007a-4820-4709-98f2-6c8ac22befec] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 13:33:16.302820   50586 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-530000" [fa50995d-5bbf-4ed6-ae6c-5559280722c9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 13:33:16.302829   50586 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-530000" [906cf7c0-0e02-4aed-8f95-a3cb9f653d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 13:33:16.302835   50586 system_pods.go:61] "storage-provisioner" [c282cf7f-5411-4eac-9ab9-c32b87c4d532] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 13:33:16.302842   50586 system_pods.go:74] duration metric: took 10.41964ms to wait for pod list to return data ...
	I0717 13:33:16.302850   50586 kubeadm.go:581] duration metric: took 219.404048ms to wait for : map[apiserver:true system_pods:true] ...
	I0717 13:33:16.302864   50586 node_conditions.go:102] verifying NodePressure condition ...
	I0717 13:33:16.306609   50586 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0717 13:33:16.306628   50586 node_conditions.go:123] node cpu capacity is 6
	I0717 13:33:16.306643   50586 node_conditions.go:105] duration metric: took 3.77469ms to run NodePressure ...
	I0717 13:33:16.306655   50586 start.go:228] waiting for startup goroutines ...
	I0717 13:33:16.359858   50586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57742 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/kubernetes-upgrade-530000/id_rsa Username:docker}
	I0717 13:33:16.402820   50586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 13:33:16.467148   50586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 13:33:16.880471   50586 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 13:33:16.900649   50586 addons.go:502] enable addons completed in 823.730358ms: enabled=[storage-provisioner default-storageclass]
	I0717 13:33:16.900682   50586 start.go:233] waiting for cluster config update ...
	I0717 13:33:16.900695   50586 start.go:242] writing updated cluster config ...
	I0717 13:33:16.901139   50586 ssh_runner.go:195] Run: rm -f paused
	I0717 13:33:16.942788   50586 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 13:33:16.963600   50586 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-530000" cluster and "default" namespace by default
	I0717 13:33:12.362969   51042 cli_runner.go:164] Run: docker container inspect custom-flannel-859000 --format={{.State.Running}}
	I0717 13:33:12.412785   51042 cli_runner.go:164] Run: docker container inspect custom-flannel-859000 --format={{.State.Status}}
	I0717 13:33:12.468767   51042 cli_runner.go:164] Run: docker exec custom-flannel-859000 stat /var/lib/dpkg/alternatives/iptables
	I0717 13:33:12.614688   51042 oci.go:144] the created container "custom-flannel-859000" has a running status.
	I0717 13:33:12.614748   51042 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa...
	I0717 13:33:12.772342   51042 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 13:33:12.840329   51042 cli_runner.go:164] Run: docker container inspect custom-flannel-859000 --format={{.State.Status}}
	I0717 13:33:12.902013   51042 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 13:33:12.902033   51042 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-859000 chown docker:docker /home/docker/.ssh/authorized_keys]
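
The kic.go/kic_runner.go lines above provision SSH access to the new container: generate a keypair under the profile's machines directory, copy the public half in as authorized_keys, then chown it via a privileged docker exec. A hypothetical reconstruction of those steps (requires golang.org/x/crypto/ssh; file locations are simplified, and the two docker commands mirror the logged ones):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"log"
    	"os"
    	"os/exec"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// PEM-encode the private half, as with the id_rsa file in the log.
    	keyPEM := pem.EncodeToMemory(&pem.Block{
    		Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv),
    	})
    	if err := os.WriteFile("id_rsa", keyPEM, 0600); err != nil {
    		log.Fatal(err)
    	}
    	pub, err := ssh.NewPublicKey(&priv.PublicKey)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("authorized_keys", ssh.MarshalAuthorizedKey(pub), 0600); err != nil {
    		log.Fatal(err)
    	}
    	// Install and chown inside the container, mirroring the kic_runner steps.
    	for _, args := range [][]string{
    		{"cp", "authorized_keys", "custom-flannel-859000:/home/docker/.ssh/authorized_keys"},
    		{"exec", "--privileged", "custom-flannel-859000", "chown", "docker:docker", "/home/docker/.ssh/authorized_keys"},
    	} {
    		if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
    			log.Fatalf("docker %v: %v\n%s", args, err, out)
    		}
    	}
    }
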
	I0717 13:33:13.001998   51042 cli_runner.go:164] Run: docker container inspect custom-flannel-859000 --format={{.State.Status}}
	I0717 13:33:13.056962   51042 machine.go:88] provisioning docker machine ...
	I0717 13:33:13.057003   51042 ubuntu.go:169] provisioning hostname "custom-flannel-859000"
	I0717 13:33:13.057129   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:13.113812   51042 main.go:141] libmachine: Using SSH client type: native
	I0717 13:33:13.114239   51042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58396 <nil> <nil>}
	I0717 13:33:13.114255   51042 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-859000 && echo "custom-flannel-859000" | sudo tee /etc/hostname
	I0717 13:33:13.253300   51042 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-859000
	
	I0717 13:33:13.253408   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:13.307049   51042 main.go:141] libmachine: Using SSH client type: native
	I0717 13:33:13.307395   51042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58396 <nil> <nil>}
	I0717 13:33:13.307408   51042 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-859000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-859000/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-859000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 13:33:13.434811   51042 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 13:33:13.434835   51042 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 13:33:13.434860   51042 ubuntu.go:177] setting up certificates
	I0717 13:33:13.434870   51042 provision.go:83] configureAuth start
	I0717 13:33:13.434972   51042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-859000
	I0717 13:33:13.490955   51042 provision.go:138] copyHostCerts
	I0717 13:33:13.491062   51042 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 13:33:13.491071   51042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 13:33:13.491204   51042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 13:33:13.491418   51042 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 13:33:13.491425   51042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 13:33:13.491498   51042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 13:33:13.491666   51042 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 13:33:13.491672   51042 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 13:33:13.491734   51042 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 13:33:13.491868   51042 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-859000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube custom-flannel-859000]
	I0717 13:33:13.754823   51042 provision.go:172] copyRemoteCerts
	I0717 13:33:13.754904   51042 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 13:33:13.754979   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:13.806230   51042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58396 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa Username:docker}
	I0717 13:33:13.900056   51042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 13:33:13.923112   51042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 13:33:13.944486   51042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 13:33:13.965876   51042 provision.go:86] duration metric: configureAuth took 530.988212ms
	I0717 13:33:13.965890   51042 ubuntu.go:193] setting minikube options for container-runtime
	I0717 13:33:13.966045   51042 config.go:182] Loaded profile config "custom-flannel-859000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:33:13.966111   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:14.016284   51042 main.go:141] libmachine: Using SSH client type: native
	I0717 13:33:14.016630   51042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58396 <nil> <nil>}
	I0717 13:33:14.016645   51042 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 13:33:14.145269   51042 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 13:33:14.145283   51042 ubuntu.go:71] root file system type: overlay
	I0717 13:33:14.145382   51042 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 13:33:14.145479   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:14.196522   51042 main.go:141] libmachine: Using SSH client type: native
	I0717 13:33:14.196879   51042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58396 <nil> <nil>}
	I0717 13:33:14.196929   51042 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 13:33:14.333893   51042 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 13:33:14.334001   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:14.384465   51042 main.go:141] libmachine: Using SSH client type: native
	I0717 13:33:14.384823   51042 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58396 <nil> <nil>}
	I0717 13:33:14.384838   51042 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 13:33:15.042445   51042 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:33:14.332060762 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
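The diff output above is the point of the `sudo diff -u ... || { ... }` invocation two dozen lines earlier: the rendered unit is only swapped in, and dockerd only restarted, when it actually differs from what is installed. A hedged Go rendering of that idempotent update (must run as root; the `-f` systemctl sequence is copied from the logged command):

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	current, _ := os.ReadFile(unit) // may not exist yet; treated as empty
    	proposed, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if bytes.Equal(current, proposed) {
    		return // unchanged: no daemon-reload, no restart churn
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		log.Fatal(err)
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
    	} {
    		cmd := exec.Command("systemctl", append([]string{"-f"}, args...)...)
    		if out, err := cmd.CombinedOutput(); err != nil {
    			log.Fatalf("systemctl %v: %v\n%s", args, err, out)
    		}
    	}
    }
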
	I0717 13:33:15.042473   51042 machine.go:91] provisioned docker machine in 1.985481788s
	I0717 13:33:15.042483   51042 client.go:171] LocalClient.Create took 6.798692553s
	I0717 13:33:15.042501   51042 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-859000" took 6.79880013s
	I0717 13:33:15.042511   51042 start.go:300] post-start starting for "custom-flannel-859000" (driver="docker")
	I0717 13:33:15.042522   51042 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 13:33:15.042605   51042 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 13:33:15.042672   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:15.093221   51042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58396 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa Username:docker}
	I0717 13:33:15.186131   51042 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 13:33:15.190912   51042 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 13:33:15.190946   51042 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 13:33:15.190957   51042 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 13:33:15.190963   51042 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 13:33:15.190972   51042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 13:33:15.191074   51042 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 13:33:15.191295   51042 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 13:33:15.191545   51042 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 13:33:15.200401   51042 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:33:15.223217   51042 start.go:303] post-start completed in 180.695902ms
	I0717 13:33:15.224218   51042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-859000
	I0717 13:33:15.275418   51042 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/config.json ...
	I0717 13:33:15.275875   51042 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:33:15.275938   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:15.329431   51042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58396 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa Username:docker}
	I0717 13:33:15.419817   51042 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 13:33:15.427250   51042 start.go:128] duration metric: createHost completed in 7.225886524s
	I0717 13:33:15.427277   51042 start.go:83] releasing machines lock for "custom-flannel-859000", held for 7.226023654s
	I0717 13:33:15.427401   51042 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-859000
	I0717 13:33:15.480990   51042 ssh_runner.go:195] Run: cat /version.json
	I0717 13:33:15.481004   51042 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 13:33:15.481086   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:15.481094   51042 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-859000
	I0717 13:33:15.536880   51042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58396 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa Username:docker}
	I0717 13:33:15.536947   51042 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58396 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/custom-flannel-859000/id_rsa Username:docker}
	W0717 13:33:15.627103   51042 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 13:33:15.627214   51042 ssh_runner.go:195] Run: systemctl --version
	I0717 13:33:15.750243   51042 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 13:33:15.756051   51042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 13:33:15.778722   51042 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 13:33:15.778794   51042 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 13:33:15.803064   51042 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0717 13:33:15.803086   51042 start.go:469] detecting cgroup driver to use...
	I0717 13:33:15.803101   51042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:33:15.803229   51042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:33:15.821375   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 13:33:15.833264   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 13:33:15.844549   51042 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 13:33:15.844644   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 13:33:15.855948   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:33:15.869702   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 13:33:15.880677   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:33:15.890311   51042 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 13:33:15.899958   51042 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 13:33:15.910713   51042 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 13:33:15.919835   51042 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 13:33:15.929636   51042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:33:15.999962   51042 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 13:33:16.079435   51042 start.go:469] detecting cgroup driver to use...
	I0717 13:33:16.079463   51042 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:33:16.079527   51042 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 13:33:16.096849   51042 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 13:33:16.096960   51042 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 13:33:16.116510   51042 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:33:16.144890   51042 ssh_runner.go:195] Run: which cri-dockerd
	I0717 13:33:16.151341   51042 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 13:33:16.170897   51042 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 13:33:16.190121   51042 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 13:33:16.281326   51042 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 13:33:16.373384   51042 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 13:33:16.373400   51042 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 13:33:16.413551   51042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:33:16.493214   51042 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:33:16.790739   51042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 13:33:16.865359   51042 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 13:33:16.939575   51042 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 13:33:17.012851   51042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:33:17.096641   51042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 13:33:17.109972   51042 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:33:17.192103   51042 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 13:33:17.281240   51042 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 13:33:17.281365   51042 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 13:33:17.287337   51042 start.go:537] Will wait 60s for crictl version
	I0717 13:33:17.287420   51042 ssh_runner.go:195] Run: which crictl
	I0717 13:33:17.293393   51042 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 13:33:17.342684   51042 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 13:33:17.342767   51042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:33:17.367885   51042 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
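	
	The provisioning steps above can be replayed by hand against the node container; a minimal sketch, assuming the container name from this log and the binary paths the runner prints:
	
	  docker exec kubernetes-upgrade-530000 stat /var/run/cri-dockerd.sock
	  docker exec kubernetes-upgrade-530000 sudo /usr/bin/crictl version
	  docker exec kubernetes-upgrade-530000 docker version --format '{{.Server.Version}}'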
	
	* 
	* ==> Docker <==
	* Jul 17 20:33:01 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:01Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jul 17 20:33:01 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:01Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jul 17 20:33:01 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:01Z" level=info msg="Start cri-dockerd grpc backend"
	Jul 17 20:33:01 kubernetes-upgrade-530000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jul 17 20:33:02 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0932c307bc811294e94ab57612db5afdf5ff2ee2a4af977de052b83aa0e93e60/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:02 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3786d4deeb00821670f2fff92cab02f9c4bd1933dbba838bc132c87471c00d3c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:02 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e589e191395535a1a5309e2e0e0f1942970d76f120ae659c8234b63515596324/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:02 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0cf4f7ac4c4025ec980b2446c58c777cbea9ede4133f799483d456ddc8644a4f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.595694094Z" level=info msg="ignoring event" container=e589e191395535a1a5309e2e0e0f1942970d76f120ae659c8234b63515596324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.703803311Z" level=info msg="ignoring event" container=6ab2b1b9356402359eb1e0c4ddbf0f3aba0adc434e7cfdb8383d3f51224c98ed module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.705372302Z" level=info msg="ignoring event" container=0cf4f7ac4c4025ec980b2446c58c777cbea9ede4133f799483d456ddc8644a4f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.708418283Z" level=info msg="ignoring event" container=0932c307bc811294e94ab57612db5afdf5ff2ee2a4af977de052b83aa0e93e60 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.713493878Z" level=info msg="ignoring event" container=3786d4deeb00821670f2fff92cab02f9c4bd1933dbba838bc132c87471c00d3c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.718819260Z" level=info msg="ignoring event" container=d173c60457f41bc1ee9f5b98a39fa1582aa970c0985ebfce065f51f67f813ebb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:07 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:07.742478247Z" level=info msg="ignoring event" container=9a70b770b5ce982fb70cfe4eee7960252513cc110d3cac21003c84506eb7761f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:08 kubernetes-upgrade-530000 dockerd[12758]: time="2023-07-17T20:33:08.606132807Z" level=info msg="ignoring event" container=625d8d2d7e2968f246f3a061343e293e8411b78308232a8e454d4f7b0c9eda89 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1733c0ec00997e285a6c749844f65fd0f749c8270de3014ae7eaae5320a2b53d/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: W0717 20:33:08.845807   13070 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3846d9db50c2abd3f68e91359007b8642cfb9e3277e49b330dabcaef282090a6/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: W0717 20:33:08.847063   13070 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/84ee10ab8699ce64ac4ad6940c32d72dfc80b8f1e58c4e7fd79cc8481c82a10c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: W0717 20:33:08.903751   13070 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/90c709b9bab30d7174df38ada078cb3390fb942fb14c6986f7d08e18e4c64a8f/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 20:33:08 kubernetes-upgrade-530000 cri-dockerd[13070]: W0717 20:33:08.911683   13070 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 20:33:10 kubernetes-upgrade-530000 cri-dockerd[13070]: time="2023-07-17T20:33:10Z" level=error msg="Failed to retrieve checkpoint for sandbox cc561b35d10839c719caa8b126a39975c1880eef98e917d587647ea04341fa0b: checkpoint is not found"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86f8afa693ca6       86b6af7dd652c       8 seconds ago       Running             etcd                      2                   1733c0ec00997       etcd-kubernetes-upgrade-530000
	f807f6184c244       41697ceeb70b3       8 seconds ago       Running             kube-scheduler            2                   3846d9db50c2a       kube-scheduler-kubernetes-upgrade-530000
	20b732b95edf3       7cffc01dba0e1       8 seconds ago       Running             kube-controller-manager   2                   90c709b9bab30       kube-controller-manager-kubernetes-upgrade-530000
	c912444c0c40f       08a0c939e61b7       8 seconds ago       Running             kube-apiserver            2                   84ee10ab8699c       kube-apiserver-kubernetes-upgrade-530000
	d173c60457f41       41697ceeb70b3       16 seconds ago      Exited              kube-scheduler            1                   0cf4f7ac4c402       kube-scheduler-kubernetes-upgrade-530000
	625d8d2d7e296       08a0c939e61b7       16 seconds ago      Exited              kube-apiserver            1                   e589e19139553       kube-apiserver-kubernetes-upgrade-530000
	6ab2b1b935640       7cffc01dba0e1       16 seconds ago      Exited              kube-controller-manager   1                   3786d4deeb008       kube-controller-manager-kubernetes-upgrade-530000
	9a70b770b5ce9       86b6af7dd652c       16 seconds ago      Exited              etcd                      1                   0932c307bc811       etcd-kubernetes-upgrade-530000
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-530000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-530000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=46c8e7c4243e42e98d29628785c0523fbabbd9b5
	                    minikube.k8s.io/name=kubernetes-upgrade-530000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T13_32_45_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 20:32:41 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-530000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 20:33:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 20:33:13 +0000   Mon, 17 Jul 2023 20:32:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 20:33:13 +0000   Mon, 17 Jul 2023 20:32:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 20:33:13 +0000   Mon, 17 Jul 2023 20:32:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 17 Jul 2023 20:33:13 +0000   Mon, 17 Jul 2023 20:32:39 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-530000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 455df8008773479d9c12bf75da71059e
	  System UUID:                455df8008773479d9c12bf75da71059e
	  Boot ID:                    21604644-d35a-4e4f-9198-120c5df14657
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-530000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kube-apiserver-kubernetes-upgrade-530000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-530000    200m (3%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-kubernetes-upgrade-530000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  Starting                 40s                kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x7 over 40s)  kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 34s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  34s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  34s                kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet  Node kubernetes-upgrade-530000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet  Updated Node Allocatable limit across pods
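	
	The Ready=False condition in the node description above blames an uninitialized CNI config; a quick hedged check of the conf_dir configured earlier in this log (container name assumed from the log):
	
	  docker exec kubernetes-upgrade-530000 ls -la /etc/cni/net.d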
	
	* 
	* ==> dmesg <==
	* [  +0.000175] FS-Cache: O-key=[8] '360ba10600000000'
	[  +0.000053] FS-Cache: N-cookie c=0000001f [p=00000017 fl=2 nc=0 na=1]
	[  +0.000039] FS-Cache: N-cookie d=0000000069b0272a{9p.inode} n=00000000ea49db68
	[  +0.000070] FS-Cache: N-key=[8] '360ba10600000000'
	[  +0.001560] FS-Cache: Duplicate cookie detected
	[  +0.000047] FS-Cache: O-cookie c=00000019 [p=00000017 fl=226 nc=0 na=1]
	[  +0.000047] FS-Cache: O-cookie d=0000000069b0272a{9p.inode} n=000000005f363454
	[  +0.000165] FS-Cache: O-key=[8] '360ba10600000000'
	[  +0.000076] FS-Cache: N-cookie c=00000020 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000056] FS-Cache: N-cookie d=0000000069b0272a{9p.inode} n=000000004108376a
	[  +0.000078] FS-Cache: N-key=[8] '360ba10600000000'
	[  +1.639248] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=0000001a [p=00000017 fl=226 nc=0 na=1]
	[  +0.000040] FS-Cache: O-cookie d=0000000069b0272a{9p.inode} n=00000000b6198e2a
	[  +0.000076] FS-Cache: O-key=[8] '350ba10600000000'
	[  +0.000067] FS-Cache: N-cookie c=00000023 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000082] FS-Cache: N-cookie d=0000000069b0272a{9p.inode} n=00000000154f1904
	[  +0.000070] FS-Cache: N-key=[8] '350ba10600000000'
	[  +0.401136] FS-Cache: Duplicate cookie detected
	[  +0.000042] FS-Cache: O-cookie c=0000001d [p=00000017 fl=226 nc=0 na=1]
	[  +0.000055] FS-Cache: O-cookie d=0000000069b0272a{9p.inode} n=00000000e0b89fe5
	[  +0.000089] FS-Cache: O-key=[8] '580ba10600000000'
	[  +0.000039] FS-Cache: N-cookie c=00000024 [p=00000017 fl=2 nc=0 na=1]
	[  +0.000079] FS-Cache: N-cookie d=0000000069b0272a{9p.inode} n=00000000a8e0de7a
	[  +0.000082] FS-Cache: N-key=[8] '580ba10600000000'
	
	* 
	* ==> etcd [86f8afa693ca] <==
	* {"level":"info","ts":"2023-07-17T20:33:11.334Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T20:33:11.334Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T20:33:11.335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2023-07-17T20:33:11.335Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-07-17T20:33:11.335Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:33:11.335Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T20:33:11.338Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T20:33:11.338Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T20:33:11.338Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T20:33:11.338Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:11.338Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-07-17T20:33:12.613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-07-17T20:33:12.614Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-530000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T20:33:12.614Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:33:12.614Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:33:12.615Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-17T20:33:12.615Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T20:33:12.617Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T20:33:12.617Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> etcd [9a70b770b5ce] <==
	* {"level":"info","ts":"2023-07-17T20:33:03.224Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T20:33:03.224Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T20:33:03.224Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T20:33:03.224Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:03.224Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:04.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T20:33:04.714Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-530000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T20:33:04.714Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:33:04.714Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T20:33:04.715Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T20:33:04.715Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T20:33:04.715Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-17T20:33:04.715Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T20:33:07.616Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T20:33:07.617Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-530000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-07-17T20:33:07.636Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-07-17T20:33:07.637Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:07.698Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T20:33:07.698Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-530000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> kernel <==
	*  20:33:19 up  4:31,  0 users,  load average: 3.13, 2.11, 1.38
	Linux kubernetes-upgrade-530000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [625d8d2d7e29] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 20:33:07.626852       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 20:33:07.626822       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 20:33:07.626983       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-apiserver [c912444c0c40] <==
	* I0717 20:33:13.603220       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0717 20:33:13.604301       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0717 20:33:13.603248       1 controller.go:85] Starting OpenAPI controller
	I0717 20:33:13.608196       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0717 20:33:13.608249       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0717 20:33:13.698273       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 20:33:13.707088       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 20:33:13.707129       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 20:33:13.707150       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 20:33:13.707196       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 20:33:13.707464       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 20:33:13.707660       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 20:33:13.709294       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 20:33:13.709347       1 aggregator.go:152] initial CRD sync complete...
	I0717 20:33:13.709352       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 20:33:13.709355       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 20:33:13.709360       1 cache.go:39] Caches are synced for autoregister controller
	I0717 20:33:13.722335       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 20:33:14.399196       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 20:33:14.609735       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 20:33:15.648689       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 20:33:15.906370       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 20:33:16.003601       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 20:33:16.050270       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 20:33:16.057057       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [20b732b95edf] <==
	* I0717 20:33:15.719616       1 controllermanager.go:638] "Started controller" controller="bootstrapsigner"
	I0717 20:33:15.719717       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	E0717 20:33:15.721818       1 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
	I0717 20:33:15.721835       1 controllermanager.go:616] "Warning: skipping controller" controller="service"
	I0717 20:33:15.721843       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0717 20:33:15.721848       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	E0717 20:33:15.724806       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0717 20:33:15.724851       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0717 20:33:15.726841       1 controllermanager.go:638] "Started controller" controller="endpoint"
	I0717 20:33:15.727095       1 endpoints_controller.go:172] Starting endpoint controller
	I0717 20:33:15.727132       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0717 20:33:15.729526       1 controllermanager.go:638] "Started controller" controller="endpointslice"
	I0717 20:33:15.730097       1 endpointslice_controller.go:252] Starting endpoint slice controller
	I0717 20:33:15.730123       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0717 20:33:15.733291       1 controllermanager.go:638] "Started controller" controller="replicationcontroller"
	I0717 20:33:15.733533       1 replica_set.go:201] "Starting controller" name="replicationcontroller"
	I0717 20:33:15.733585       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0717 20:33:15.740075       1 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
	I0717 20:33:15.740190       1 publisher.go:101] Starting root CA certificate configmap publisher
	I0717 20:33:15.740197       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0717 20:33:15.743061       1 shared_informer.go:318] Caches are synced for tokens
	I0717 20:33:15.748512       1 controllermanager.go:638] "Started controller" controller="disruption"
	I0717 20:33:15.748741       1 disruption.go:423] Sending events to api server.
	I0717 20:33:15.749236       1 disruption.go:434] Starting disruption controller
	I0717 20:33:15.749293       1 shared_informer.go:311] Waiting for caches to sync for disruption
	
	* 
	* ==> kube-controller-manager [6ab2b1b93564] <==
	* I0717 20:33:03.705698       1 serving.go:348] Generated self-signed cert in-memory
	I0717 20:33:04.306502       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 20:33:04.306556       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:33:04.307772       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 20:33:04.307948       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 20:33:04.308214       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 20:33:04.308333       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [d173c60457f4] <==
	* I0717 20:33:03.902593       1 serving.go:348] Generated self-signed cert in-memory
	W0717 20:33:05.761401       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 20:33:05.761471       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 20:33:05.761485       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 20:33:05.761491       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 20:33:05.816568       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 20:33:05.816781       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:33:05.818116       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 20:33:05.818228       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 20:33:05.819283       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 20:33:05.818460       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 20:33:05.920169       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 20:33:07.610210       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0717 20:33:07.610461       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0717 20:33:07.611107       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0717 20:33:07.611280       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [f807f6184c24] <==
	* I0717 20:33:11.904482       1 serving.go:348] Generated self-signed cert in-memory
	I0717 20:33:13.641917       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 20:33:13.641967       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 20:33:13.700729       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 20:33:13.700776       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 20:33:13.700836       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 20:33:13.700847       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 20:33:13.700837       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 20:33:13.701421       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 20:33:13.701555       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 20:33:13.701619       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 20:33:13.802103       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 20:33:13.802227       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0717 20:33:13.802812       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.312384   14865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f491a1da89be31ae218f04c20080d8f5-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-530000\" (UID: \"f491a1da89be31ae218f04c20080d8f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.312400   14865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f491a1da89be31ae218f04c20080d8f5-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-530000\" (UID: \"f491a1da89be31ae218f04c20080d8f5\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.312418   14865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/47d2c11523f05a790ba84d4f667ddea9-etcd-certs\") pod \"etcd-kubernetes-upgrade-530000\" (UID: \"47d2c11523f05a790ba84d4f667ddea9\") " pod="kube-system/etcd-kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.312432   14865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33f706a005a13c62407823d150949ba1-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-530000\" (UID: \"33f706a005a13c62407823d150949ba1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.312470   14865 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33f706a005a13c62407823d150949ba1-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-530000\" (UID: \"33f706a005a13c62407823d150949ba1\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.433806   14865 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:10.434291   14865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.544071   14865 scope.go:115] "RemoveContainer" containerID="625d8d2d7e2968f246f3a061343e293e8411b78308232a8e454d4f7b0c9eda89"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.550430   14865 scope.go:115] "RemoveContainer" containerID="6ab2b1b9356402359eb1e0c4ddbf0f3aba0adc434e7cfdb8383d3f51224c98ed"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.561110   14865 scope.go:115] "RemoveContainer" containerID="d173c60457f41bc1ee9f5b98a39fa1582aa970c0985ebfce065f51f67f813ebb"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.569515   14865 scope.go:115] "RemoveContainer" containerID="9a70b770b5ce982fb70cfe4eee7960252513cc110d3cac21003c84506eb7761f"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:10.712111   14865 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-530000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:10.842524   14865 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-530000"
	Jul 17 20:33:10 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:10.842870   14865 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-530000"
	Jul 17 20:33:11 kubernetes-upgrade-530000 kubelet[14865]: W0717 20:33:11.117743   14865 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jul 17 20:33:11 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:11.117801   14865 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jul 17 20:33:11 kubernetes-upgrade-530000 kubelet[14865]: W0717 20:33:11.244935   14865 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-530000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jul 17 20:33:11 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:11.245073   14865 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-530000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jul 17 20:33:11 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:11.653420   14865 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-530000"
	Jul 17 20:33:13 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:13.722760   14865 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-530000"
	Jul 17 20:33:13 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:13.723233   14865 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-530000"
	Jul 17 20:33:14 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:14.065643   14865 apiserver.go:52] "Watching apiserver"
	Jul 17 20:33:14 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:14.110340   14865 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 20:33:14 kubernetes-upgrade-530000 kubelet[14865]: I0717 20:33:14.142589   14865 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 20:33:14 kubernetes-upgrade-530000 kubelet[14865]: E0717 20:33:14.354675   14865 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-530000\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-530000"
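	
	The kubelet above goes from "connection refused" to a successful node registration once the restarted apiserver is serving; a hedged spot-check of that endpoint (port 8443 comes from the logged URLs, and curl being present in the node image is an assumption):
	
	  docker exec kubernetes-upgrade-530000 curl -sk https://localhost:8443/readyz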
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-530000 -n kubernetes-upgrade-530000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-530000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-530000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-530000 describe pod storage-provisioner: exit status 1 (58.9368ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-530000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-530000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-530000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-530000: (2.81451313s)
--- FAIL: TestKubernetesUpgrade (571.66s)

                                                
                                    
x
+
TestMissingContainerUpgrade (53.88s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 70 (37.710351991s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:23:21.654918978 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "missing-upgrade-491000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:23:35.501918846 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p missing-upgrade-491000", then "minikube start -p missing-upgrade-491000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
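
The ExecStart= reset that the generated unit's comments describe is standard systemd behavior: a drop-in can replace the base unit's command only by clearing the inherited value first. A minimal sketch of that pattern, with an illustrative drop-in path and command that are not taken from this report:

sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
# An empty ExecStart= clears the command inherited from the base unit;
# without it, systemd rejects a second ExecStart for Type=notify services.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker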
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:23:35.501918846 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
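Note: the diff captured above also documents the systemd mechanism behind this failure mode. A drop-in that wants to replace the inherited command must first write an empty ExecStart= line, because more than one non-empty ExecStart= is only allowed for Type=oneshot units; the provisioner's one-liner then replaces the unit only when diff reports a difference (non-zero exit) and restarts docker, which is the step that fails here. A diagnostic sketch for seeing what systemd actually rejected inside the node container (container name taken from this log; the commands are illustrative, not part of the test suite):

    # Inspect the failed unit and its recent journal inside the kic container
    docker exec missing-upgrade-491000 systemctl status docker.service --no-pager
    docker exec missing-upgrade-491000 journalctl -u docker.service --no-pager -n 20
    # Static check of the rendered unit file, without attempting another start
    docker exec missing-upgrade-491000 systemd-analyze verify /lib/systemd/system/docker.service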
version_upgrade_test.go:321: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 70 (4.156887614s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-491000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3372171771.exe start -p missing-upgrade-491000 --memory=2200 --driver=docker : exit status 70 (4.159438575s)

                                                
                                                
-- stdout --
	* [missing-upgrade-491000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-491000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-07-17 13:23:49.067788 -0700 PDT m=+2413.635580465
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-491000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-491000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07",
	        "Created": "2023-07-17T20:23:29.674097671Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 614407,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:23:29.855102003Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07/hostname",
	        "HostsPath": "/var/lib/docker/containers/28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07/hosts",
	        "LogPath": "/var/lib/docker/containers/28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07/28e95fdb66dd5e6dbc05dbb08cf9574247fa15ea7d8cf4e630f385c11c7adb07-json.log",
	        "Name": "/missing-upgrade-491000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-491000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6b5da1f320f6be399ab2d0cd055b0b2cbdd721c120f571bd35919cd8261cd6b7-init/diff:/var/lib/docker/overlay2/b092b3b5a7542fef481e5f60a76693b4dba611ccd25e5f4b7e2ad92e85e41bfd/diff:/var/lib/docker/overlay2/edc22f6d72adbe2294de0d8035449760185bc55bde93b2ac5045b1525989ce6f/diff:/var/lib/docker/overlay2/0fd4f6653596f2c165d2881387c8d2820322bc692e1d0e72dcfa878409d0d793/diff:/var/lib/docker/overlay2/be52d2cad56808f531863336531c4c9560737122e8ff4972b0850085bcf6d7d3/diff:/var/lib/docker/overlay2/2f96e757a559e43114212d52aa90b5b5d6f60dd0041ad53c3d54ad5ff0e5e31d/diff:/var/lib/docker/overlay2/5692384f9e4c7573deebe55fff002cca1f52dba8a44609746ee58e4fd07b37d1/diff:/var/lib/docker/overlay2/3329991389a0b381baa38445ad43709e269a37b065240fb1056e54f120662219/diff:/var/lib/docker/overlay2/e49e4276d70ba4816de90a4fbcf888f1eebee1d7cbcb7f86607e75197fbc0b4b/diff:/var/lib/docker/overlay2/4fbf7baebf2866b65f86dfeb4ac76d905e0d918cc57454a2113ebcf81b150abc/diff:/var/lib/docker/overlay2/2666a1
36a8ebce5cb9f8d8c18104273503e26d4150a0ff14295c7dc7e4d62487/diff:/var/lib/docker/overlay2/11947e02eaf4e109c4b6aa1b599e5699c5cff8c5b3694358680af2c2d2f8a63d/diff:/var/lib/docker/overlay2/785d07e5c82f6290d3f36a262e695bc299cbe4918f0a4f3b5758b9e266b7297d/diff:/var/lib/docker/overlay2/fd250aa52b12fc4f37cb44ba3a509c4194798df2d97476391d64c2653f52a87d/diff:/var/lib/docker/overlay2/4144a2900350bef3c5d08f14c9574e43eed5d7fa3e365129e9bfad041f08ad25/diff:/var/lib/docker/overlay2/6e72e826814d1f6895446f01486c733272f29711a4fedf035a56a9769d641069/diff:/var/lib/docker/overlay2/bf1baab05184c9401fc3ca5f76f4ee9d5d369b950163a7bbf0e115502b8bf1fc/diff:/var/lib/docker/overlay2/9ec49f7f622e21fb7909bf83b68516f8b576b5927e994135445f15018b4f5ee8/diff:/var/lib/docker/overlay2/752394829c5b2c0a88dd4a6a6376e359c9586d01e7596527aa3e500bb8445423/diff:/var/lib/docker/overlay2/298a166df8d125594292e4afaeaa6605b7bc7109661bd12f90c890d70fc1ad45/diff:/var/lib/docker/overlay2/5643fb07bc7d31723a33e67e3ee6b942ba4b66fcee0aab4b6625fd26ec67f208/diff:/var/lib/d
ocker/overlay2/e28bb68e5a055c51f3ecd0023e92024e8c9f07a69bc114f8af4fc2aa20e8ff1a/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6b5da1f320f6be399ab2d0cd055b0b2cbdd721c120f571bd35919cd8261cd6b7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6b5da1f320f6be399ab2d0cd055b0b2cbdd721c120f571bd35919cd8261cd6b7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6b5da1f320f6be399ab2d0cd055b0b2cbdd721c120f571bd35919cd8261cd6b7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-491000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-491000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-491000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-491000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-491000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a071bc88b1f9124e2ebf0b99a1ed613b008cbe2aac301fa22f174303f8c5ce1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57434"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57436"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1a071bc88b1f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "ef36a7cceab42e47a6f2d1034983f0544bf169a5fd4fdcc8f94d191185d0656d",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "3361ceefc20ba5c0ef4fd4641e089b49ab7465471b3fe0fa1db408704e009093",
	                    "EndpointID": "ef36a7cceab42e47a6f2d1034983f0544bf169a5fd4fdcc8f94d191185d0656d",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
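Note: when only a few fields of a dump like the one above are needed, docker inspect also accepts a Go template via --format. A sketch against fields present in this dump (container name from this log):

    # Container state and init PID
    docker inspect --format '{{.State.Status}} pid={{.State.Pid}}' missing-upgrade-491000
    # Host port mapped to the container's SSH port (22/tcp)
    docker inspect --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-491000
    # Base image the node container was created from
    docker inspect --format '{{.Config.Image}}' missing-upgrade-491000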
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-491000 -n missing-upgrade-491000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-491000 -n missing-upgrade-491000: exit status 6 (346.292368ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 13:23:49.455208   47765 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-491000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-491000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-491000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-491000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-491000: (2.142488421s)
--- FAIL: TestMissingContainerUpgrade (53.88s)
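Note: the status check above also warned that kubectl was pointing at a stale context ("missing-upgrade-491000" no longer appears in the kubeconfig). The profile was deleted during cleanup, but had it been kept, the repair the warning itself suggests would be (profile name from this log):

    # Rewrite the kubeconfig entry for this profile, then confirm the active context
    out/minikube-darwin-amd64 update-context -p missing-upgrade-491000
    kubectl config current-context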

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker 
E0717 13:25:12.979563   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:25:36.621584   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker : exit status 70 (36.417343058s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-327000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3292246259
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:25:30.209119896 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-327000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:25:44.440119761 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-327000", then "minikube start -p stopped-upgrade-327000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:25:44.440119761 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker : exit status 70 (4.022015896s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-327000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1277073503
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-327000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker 
E0717 13:25:53.940041   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.2299511446.exe start -p stopped-upgrade-327000 --memory=2200 --vm-driver=docker : exit status 70 (4.128614155s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-327000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig513361360
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-327000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:201: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (47.00s)
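Note: the repeated attempts above fail identically, so the recovery hint printed after the first attempt is the natural manual follow-up; spelled out with the profile name from this log:

    # Remove the half-provisioned profile, then retry with verbose logging
    minikube delete -p stopped-upgrade-327000
    minikube start -p stopped-upgrade-327000 --alsologtostderr -v=1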

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (257.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0717 13:37:02.454675   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m16.73742814s)

                                                
                                                
-- stdout --
	* [old-k8s-version-378000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-378000 in cluster old-k8s-version-378000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 13:37:00.895219   53894 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:37:00.895417   53894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:37:00.895422   53894 out.go:309] Setting ErrFile to fd 2...
	I0717 13:37:00.895426   53894 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:37:00.895618   53894 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:37:00.897932   53894 out.go:303] Setting JSON to false
	I0717 13:37:00.919119   53894 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":16591,"bootTime":1689609629,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 13:37:00.919238   53894 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 13:37:00.942267   53894 out.go:177] * [old-k8s-version-378000] minikube v1.30.1 on Darwin 13.4.1
	I0717 13:37:00.985455   53894 notify.go:220] Checking for updates...
	I0717 13:37:01.012217   53894 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 13:37:01.054148   53894 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:37:01.096342   53894 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 13:37:01.139157   53894 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 13:37:01.182294   53894 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 13:37:01.225203   53894 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 13:37:01.246704   53894 config.go:182] Loaded profile config "kubenet-859000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:37:01.246799   53894 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 13:37:01.303200   53894 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 13:37:01.303337   53894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:37:01.407803   53894 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:37:01.395972065 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<n
il>}}
	I0717 13:37:01.453309   53894 out.go:177] * Using the docker driver based on user configuration
	I0717 13:37:01.474422   53894 start.go:298] selected driver: docker
	I0717 13:37:01.474440   53894 start.go:880] validating driver "docker" against <nil>
	I0717 13:37:01.474455   53894 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 13:37:01.478476   53894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:37:01.585145   53894 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:37:01.574671534 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:37:01.585328   53894 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 13:37:01.585517   53894 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 13:37:01.606965   53894 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 13:37:01.627735   53894 cni.go:84] Creating CNI manager for ""
	I0717 13:37:01.627788   53894 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:37:01.627806   53894 start_flags.go:319] config:
	{Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:37:01.664933   53894 out.go:177] * Starting control plane node old-k8s-version-378000 in cluster old-k8s-version-378000
	I0717 13:37:01.686871   53894 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 13:37:01.708794   53894 out.go:177] * Pulling base image ...
	I0717 13:37:01.753762   53894 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:37:01.753855   53894 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 13:37:01.753881   53894 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 13:37:01.753905   53894 cache.go:57] Caching tarball of preloaded images
	I0717 13:37:01.754167   53894 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 13:37:01.754193   53894 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 13:37:01.755212   53894 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/config.json ...
	I0717 13:37:01.755334   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/config.json: {Name:mk4706edef70dff77fd69cdb25a9366fc972e7a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:01.806730   53894 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 13:37:01.806766   53894 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 13:37:01.806788   53894 cache.go:195] Successfully downloaded all kic artifacts
	I0717 13:37:01.806835   53894 start.go:365] acquiring machines lock for old-k8s-version-378000: {Name:mk1fa5bdcb933442ff3b09d713656e27b57c768b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 13:37:01.806997   53894 start.go:369] acquired machines lock for "old-k8s-version-378000" in 150.88µs
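
The machines lock is acquired here in 150.88µs, but the {Delay:500ms Timeout:10m0s} parameters in the line before it imply a poll-until-deadline protocol when the lock is contended. A minimal sketch under that assumption (illustrative names, not minikube's pkg/lock):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file until the timeout elapses,
    // sleeping `delay` between attempts, mirroring the logged parameters.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil // release function
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; provisioning can proceed")
    }
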
	I0717 13:37:01.807028   53894 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 13:37:01.807116   53894 start.go:125] createHost starting for "" (driver="docker")
	I0717 13:37:01.817175   53894 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 13:37:01.817375   53894 start.go:159] libmachine.API.Create for "old-k8s-version-378000" (driver="docker")
	I0717 13:37:01.817417   53894 client.go:168] LocalClient.Create starting
	I0717 13:37:01.817527   53894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem
	I0717 13:37:01.817563   53894 main.go:141] libmachine: Decoding PEM data...
	I0717 13:37:01.817580   53894 main.go:141] libmachine: Parsing certificate...
	I0717 13:37:01.817658   53894 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem
	I0717 13:37:01.817683   53894 main.go:141] libmachine: Decoding PEM data...
	I0717 13:37:01.817692   53894 main.go:141] libmachine: Parsing certificate...
	I0717 13:37:01.825449   53894 cli_runner.go:164] Run: docker network inspect old-k8s-version-378000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 13:37:01.878636   53894 cli_runner.go:211] docker network inspect old-k8s-version-378000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 13:37:01.878753   53894 network_create.go:281] running [docker network inspect old-k8s-version-378000] to gather additional debugging logs...
	I0717 13:37:01.878771   53894 cli_runner.go:164] Run: docker network inspect old-k8s-version-378000
	W0717 13:37:01.928426   53894 cli_runner.go:211] docker network inspect old-k8s-version-378000 returned with exit code 1
	I0717 13:37:01.928454   53894 network_create.go:284] error running [docker network inspect old-k8s-version-378000]: docker network inspect old-k8s-version-378000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-378000 not found
	I0717 13:37:01.928474   53894 network_create.go:286] output of [docker network inspect old-k8s-version-378000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-378000 not found
	
	** /stderr **
	I0717 13:37:01.928571   53894 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 13:37:02.071794   53894 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:37:02.072138   53894 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4a4a0}
	I0717 13:37:02.072152   53894 network_create.go:123] attempt to create docker network old-k8s-version-378000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 13:37:02.072230   53894 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-378000 old-k8s-version-378000
	W0717 13:37:02.121040   53894 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-378000 old-k8s-version-378000 returned with exit code 1
	W0717 13:37:02.121075   53894 network_create.go:148] failed to create docker network old-k8s-version-378000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-378000 old-k8s-version-378000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 13:37:02.121093   53894 network_create.go:115] failed to create docker network old-k8s-version-378000 192.168.58.0/24, will retry: subnet is taken
	I0717 13:37:02.122419   53894 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 13:37:02.122726   53894 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e4b630}
	I0717 13:37:02.122740   53894 network_create.go:123] attempt to create docker network old-k8s-version-378000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 13:37:02.122814   53894 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-378000 old-k8s-version-378000
	I0717 13:37:02.204283   53894 network_create.go:107] docker network old-k8s-version-378000 192.168.67.0/24 created
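
The three attempts above (192.168.49.0/24 reserved, 192.168.58.0/24 rejected by the daemon with "Pool overlaps", 192.168.67.0/24 free) suggest candidate /24s probed with a step of 9 in the third octet, retrying past subnets that are already taken. A sketch of that selection loop; the step size is inferred from the logged sequence, not confirmed against minikube's network.go:

    package main

    import "fmt"

    // pickSubnet walks candidate private /24s (49, 58, 67, ...) and
    // returns the first one not already reserved or rejected.
    func pickSubnet(taken map[string]bool) (string, bool) {
        for octet := 49; octet <= 247; octet += 9 {
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr, true
            }
        }
        return "", false
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // reserved by an existing network
            "192.168.58.0/24": true, // daemon said: pool overlaps
        }
        cidr, ok := pickSubnet(taken)
        fmt.Println(cidr, ok) // 192.168.67.0/24 true
    }
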
	I0717 13:37:02.204325   53894 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-378000" container
	I0717 13:37:02.204461   53894 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 13:37:02.252869   53894 cli_runner.go:164] Run: docker volume create old-k8s-version-378000 --label name.minikube.sigs.k8s.io=old-k8s-version-378000 --label created_by.minikube.sigs.k8s.io=true
	I0717 13:37:02.302729   53894 oci.go:103] Successfully created a docker volume old-k8s-version-378000
	I0717 13:37:02.302859   53894 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-378000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-378000 --entrypoint /usr/bin/test -v old-k8s-version-378000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 13:37:02.686565   53894 oci.go:107] Successfully prepared a docker volume old-k8s-version-378000
	I0717 13:37:02.686601   53894 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:37:02.686615   53894 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 13:37:02.686733   53894 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-378000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 13:37:05.428494   53894 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-378000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.741664668s)
	I0717 13:37:05.428524   53894 kic.go:199] duration metric: took 2.741899 seconds to extract preloaded images to volume
	I0717 13:37:05.428648   53894 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 13:37:05.526232   53894 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-378000 --name old-k8s-version-378000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-378000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-378000 --network old-k8s-version-378000 --ip 192.168.67.2 --volume old-k8s-version-378000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 13:37:05.801871   53894 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Running}}
	I0717 13:37:05.852936   53894 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Status}}
	I0717 13:37:05.932596   53894 cli_runner.go:164] Run: docker exec old-k8s-version-378000 stat /var/lib/dpkg/alternatives/iptables
	I0717 13:37:06.031184   53894 oci.go:144] the created container "old-k8s-version-378000" has a running status.
	I0717 13:37:06.031226   53894 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa...
	I0717 13:37:06.118072   53894 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 13:37:06.177222   53894 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Status}}
	I0717 13:37:06.228392   53894 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 13:37:06.228417   53894 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-378000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 13:37:06.320826   53894 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Status}}
	I0717 13:37:06.371998   53894 machine.go:88] provisioning docker machine ...
	I0717 13:37:06.372042   53894 ubuntu.go:169] provisioning hostname "old-k8s-version-378000"
	I0717 13:37:06.372138   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:06.421320   53894 main.go:141] libmachine: Using SSH client type: native
	I0717 13:37:06.421698   53894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59146 <nil> <nil>}
	I0717 13:37:06.421714   53894 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-378000 && echo "old-k8s-version-378000" | sudo tee /etc/hostname
	I0717 13:37:06.559214   53894 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378000
	
	I0717 13:37:06.559313   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:06.608682   53894 main.go:141] libmachine: Using SSH client type: native
	I0717 13:37:06.609026   53894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59146 <nil> <nil>}
	I0717 13:37:06.609040   53894 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-378000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-378000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-378000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 13:37:06.737833   53894 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 13:37:06.737861   53894 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 13:37:06.737880   53894 ubuntu.go:177] setting up certificates
	I0717 13:37:06.737885   53894 provision.go:83] configureAuth start
	I0717 13:37:06.737958   53894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:37:06.787713   53894 provision.go:138] copyHostCerts
	I0717 13:37:06.787819   53894 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 13:37:06.787829   53894 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 13:37:06.787957   53894 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 13:37:06.788167   53894 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 13:37:06.788173   53894 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 13:37:06.788237   53894 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 13:37:06.788391   53894 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 13:37:06.788397   53894 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 13:37:06.788457   53894 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 13:37:06.788584   53894 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-378000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-378000]
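
The server cert generated above carries both IP and DNS SANs plus the 26280h expiry from the config dump. A self-contained Go sketch of issuing such a certificate; it is self-signed here for brevity, whereas the real flow signs with the ca.pem/ca-key.pem pair:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-378000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            // SANs as listed in the log line above.
            IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "old-k8s-version-378000"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }
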
	I0717 13:37:06.842014   53894 provision.go:172] copyRemoteCerts
	I0717 13:37:06.842077   53894 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 13:37:06.842131   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:06.892053   53894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59146 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:37:06.984663   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 13:37:07.006194   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 13:37:07.028018   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 13:37:07.049455   53894 provision.go:86] duration metric: configureAuth took 311.555132ms
	I0717 13:37:07.049469   53894 ubuntu.go:193] setting minikube options for container-runtime
	I0717 13:37:07.049629   53894 config.go:182] Loaded profile config "old-k8s-version-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 13:37:07.049723   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:07.099594   53894 main.go:141] libmachine: Using SSH client type: native
	I0717 13:37:07.099958   53894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59146 <nil> <nil>}
	I0717 13:37:07.099970   53894 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 13:37:07.228163   53894 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 13:37:07.228177   53894 ubuntu.go:71] root file system type: overlay
	I0717 13:37:07.228260   53894 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 13:37:07.228346   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:07.278564   53894 main.go:141] libmachine: Using SSH client type: native
	I0717 13:37:07.278923   53894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59146 <nil> <nil>}
	I0717 13:37:07.278977   53894 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 13:37:07.418860   53894 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 13:37:07.418984   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:07.469033   53894 main.go:141] libmachine: Using SSH client type: native
	I0717 13:37:07.469375   53894 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59146 <nil> <nil>}
	I0717 13:37:07.469388   53894 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 13:37:08.119988   53894 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 20:37:07.415979828 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
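The diff output above is the other half of the one-liner issued at 13:37:07: the unit file is replaced, and docker reloaded and restarted, only when the rendered content actually differs. The same idempotent-update idiom in Go (a hypothetical helper; the log does it with diff/mv over SSH):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // updateUnit rewrites the unit file and bounces the service only
    // when the rendered content differs from what is already on disk.
    func updateUnit(path string, rendered []byte) error {
        current, _ := os.ReadFile(path) // a missing file reads as nil
        if bytes.Equal(current, rendered) {
            return nil // unchanged: no daemon-reload, no restart
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %s", err, out)
            }
        }
        return nil
    }

    func main() {
        rendered := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        if err := updateUnit("/lib/systemd/system/docker.service", rendered); err != nil {
            fmt.Println(err)
        }
    }
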
	I0717 13:37:08.120013   53894 machine.go:91] provisioned docker machine in 1.747989081s
	I0717 13:37:08.120021   53894 client.go:171] LocalClient.Create took 6.302582077s
	I0717 13:37:08.120040   53894 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-378000" took 6.302648795s
	I0717 13:37:08.120049   53894 start.go:300] post-start starting for "old-k8s-version-378000" (driver="docker")
	I0717 13:37:08.120059   53894 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 13:37:08.120131   53894 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 13:37:08.120200   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:08.170074   53894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59146 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:37:08.263657   53894 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 13:37:08.267731   53894 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 13:37:08.267759   53894 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 13:37:08.267768   53894 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 13:37:08.267773   53894 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 13:37:08.267783   53894 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 13:37:08.267870   53894 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 13:37:08.268055   53894 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 13:37:08.268242   53894 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 13:37:08.276839   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:37:08.299141   53894 start.go:303] post-start completed in 179.072701ms
	I0717 13:37:08.299641   53894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:37:08.350540   53894 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/config.json ...
	I0717 13:37:08.350997   53894 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:37:08.351069   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:08.400179   53894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59146 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:37:08.490053   53894 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 13:37:08.495719   53894 start.go:128] duration metric: createHost completed in 6.688577904s
	I0717 13:37:08.495737   53894 start.go:83] releasing machines lock for "old-k8s-version-378000", held for 6.688715292s
	I0717 13:37:08.495829   53894 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:37:08.545148   53894 ssh_runner.go:195] Run: cat /version.json
	I0717 13:37:08.545188   53894 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 13:37:08.545243   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:08.545263   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:08.596453   53894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59146 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:37:08.596451   53894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59146 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	W0717 13:37:08.686292   53894 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 13:37:08.686372   53894 ssh_runner.go:195] Run: systemctl --version
	I0717 13:37:08.809870   53894 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 13:37:08.815766   53894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 13:37:08.838776   53894 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 13:37:08.838857   53894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 13:37:08.854865   53894 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 13:37:08.870269   53894 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
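
The sed pipeline above rewrites any bridge CNI config so its subnet matches the cluster's pod CIDR (10.244.0.0/16, set below). The same edit expressed over JSON, which is what those conf files contain; field names follow the standard CNI bridge/host-local schema, and the input here is a made-up example:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        conf := []byte(`{"type":"bridge","ipam":{"type":"host-local","subnet":"10.88.0.0/16"}}`)
        var m map[string]any
        if err := json.Unmarshal(conf, &m); err != nil {
            panic(err)
        }
        if ipam, ok := m["ipam"].(map[string]any); ok {
            ipam["subnet"] = "10.244.0.0/16" // force the pod CIDR
        }
        out, _ := json.Marshal(m)
        fmt.Println(string(out))
    }
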
	I0717 13:37:08.870289   53894 start.go:469] detecting cgroup driver to use...
	I0717 13:37:08.870300   53894 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:37:08.870406   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:37:08.885786   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 13:37:08.896605   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 13:37:08.906419   53894 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 13:37:08.906489   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 13:37:08.916787   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:37:08.926460   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 13:37:08.936442   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:37:08.946125   53894 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 13:37:08.955704   53894 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 13:37:08.965523   53894 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 13:37:08.974123   53894 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 13:37:08.982612   53894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:37:09.053791   53894 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 13:37:09.130460   53894 start.go:469] detecting cgroup driver to use...
	I0717 13:37:09.130481   53894 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:37:09.130551   53894 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 13:37:09.142454   53894 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 13:37:09.142526   53894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 13:37:09.153724   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:37:09.170019   53894 ssh_runner.go:195] Run: which cri-dockerd
	I0717 13:37:09.175109   53894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 13:37:09.206139   53894 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 13:37:09.223437   53894 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 13:37:09.315581   53894 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 13:37:09.403756   53894 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 13:37:09.403771   53894 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 13:37:09.421976   53894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:37:09.505739   53894 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:37:09.774175   53894 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:37:09.801117   53894 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:37:09.904096   53894 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 13:37:09.904234   53894 cli_runner.go:164] Run: docker exec -t old-k8s-version-378000 dig +short host.docker.internal
	I0717 13:37:10.018171   53894 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 13:37:10.018303   53894 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 13:37:10.023316   53894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
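
The bash one-liner above drops any stale host.minikube.internal entry and appends the freshly dug host IP, writing through a temp file and cp. A Go rendering of the same rewrite, purely illustrative:

    package main

    import (
        "fmt"
        "strings"
    )

    // setHostEntry removes any line ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" entry, matching the grep -v / echo flow.
    func setHostEntry(hosts, ip, name string) string {
        var keep []string
        for _, line := range strings.Split(hosts, "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                keep = append(keep, line)
            }
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n")
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal"
        fmt.Println(setHostEntry(hosts, "192.168.65.254", "host.minikube.internal"))
    }
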
	I0717 13:37:10.034371   53894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:37:10.083527   53894 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:37:10.083600   53894 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:37:10.104509   53894 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:37:10.104522   53894 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:37:10.104588   53894 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:37:10.113749   53894 ssh_runner.go:195] Run: which lz4
	I0717 13:37:10.118408   53894 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 13:37:10.122666   53894 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 13:37:10.122698   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 13:37:14.965329   53894 docker.go:600] Took 4.846984 seconds to copy over tarball
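	(369,789,069 bytes over 4.846984s works out to roughly 76 MB/s, about 73 MiB/s, through the SSH transfer.)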
	I0717 13:37:14.965380   53894 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 13:37:17.186995   53894 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.221594303s)
	I0717 13:37:17.187012   53894 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 13:37:17.235506   53894 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:37:17.246085   53894 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 13:37:17.264293   53894 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:37:17.332419   53894 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:37:18.092735   53894 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:37:18.113913   53894 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:37:18.113931   53894 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:37:18.113940   53894 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 13:37:18.121580   53894 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:37:18.121979   53894 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 13:37:18.122027   53894 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:37:18.122128   53894 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 13:37:18.122161   53894 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:37:18.122168   53894 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:37:18.122185   53894 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:37:18.122188   53894 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:37:18.128240   53894 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 13:37:18.128262   53894 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:37:18.128328   53894 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:37:18.128396   53894 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 13:37:18.128480   53894 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:37:18.128530   53894 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:37:18.128557   53894 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:37:18.128580   53894 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:37:19.288843   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:37:19.313207   53894 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 13:37:19.313270   53894 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:37:19.313361   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:37:19.336507   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 13:37:19.474564   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 13:37:19.496596   53894 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 13:37:19.496623   53894 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 13:37:19.496672   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 13:37:19.520546   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 13:37:19.817749   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 13:37:19.836889   53894 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 13:37:19.836918   53894 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 13:37:19.836967   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 13:37:19.858448   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 13:37:19.958772   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:37:20.066267   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:37:20.091794   53894 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 13:37:20.091832   53894 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:37:20.091920   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:37:20.116002   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 13:37:20.315323   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:37:20.338199   53894 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 13:37:20.338231   53894 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:37:20.338290   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:37:20.361802   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 13:37:20.640880   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 13:37:20.662137   53894 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 13:37:20.662163   53894 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:37:20.662204   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 13:37:20.682731   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 13:37:20.974511   53894 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:37:20.997185   53894 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 13:37:20.997213   53894 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:37:20.997271   53894 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:37:21.016263   53894 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 13:37:21.016314   53894 cache_images.go:92] LoadImages completed in 2.902355024s
	W0717 13:37:21.016364   53894 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0: no such file or directory
	I0717 13:37:21.016447   53894 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 13:37:21.071743   53894 cni.go:84] Creating CNI manager for ""
	I0717 13:37:21.071778   53894 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:37:21.071802   53894 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 13:37:21.071819   53894 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-378000 NodeName:old-k8s-version-378000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 13:37:21.071916   53894 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-378000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-378000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
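[Editor's note] The generated kubeadm.yaml above drives the kubeadm init run that follows. As the preflight output later notes ('kubeadm config images pull'), the required control-plane images can be pre-pulled with the same config file; a minimal sketch, assuming the file and binary paths shown in this log:

  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
    kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml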
	
	I0717 13:37:21.071980   53894 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-378000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
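[Editor's note] The rendered [Service] override above is what minikube copies to the node as the 10-kubeadm.conf drop-in (348 bytes, per the scp lines below). A hedged sketch for verifying what systemd actually loads, assuming the profile name from this log:

  minikube ssh -p old-k8s-version-378000
  systemctl cat kubelet                                        # unit file plus all drop-ins
  cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf    # the ExecStart override above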
	I0717 13:37:21.072064   53894 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 13:37:21.081728   53894 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 13:37:21.081802   53894 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 13:37:21.091603   53894 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0717 13:37:21.109940   53894 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 13:37:21.126615   53894 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0717 13:37:21.143298   53894 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 13:37:21.147879   53894 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
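[Editor's note] The one-liner above updates /etc/hosts atomically: it filters out any stale control-plane.minikube.internal entry, appends the fresh mapping, and copies the result back via a temp file. The same command, reformatted for readability (the $'...' form makes the literal tab explicit):

  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts     # drop any stale entry
    echo $'192.168.67.2\tcontrol-plane.minikube.internal'        # append the fresh mapping
  } > /tmp/h.$$                                                  # $$ = shell PID, unique temp name
  sudo cp /tmp/h.$$ /etc/hosts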
	I0717 13:37:21.158872   53894 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000 for IP: 192.168.67.2
	I0717 13:37:21.158891   53894 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.159076   53894 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 13:37:21.159136   53894 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 13:37:21.159185   53894 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.key
	I0717 13:37:21.159197   53894 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.crt with IP's: []
	I0717 13:37:21.475547   53894 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.crt ...
	I0717 13:37:21.475568   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.crt: {Name:mk127283dbd0c2f1f3370862de35894d692e7a46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.475938   53894 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.key ...
	I0717 13:37:21.475948   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.key: {Name:mk25f59a3d3f2e34cdf61f8653f0ae64e59deceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.476187   53894 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key.c7fa3a9e
	I0717 13:37:21.476208   53894 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 13:37:21.551900   53894 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt.c7fa3a9e ...
	I0717 13:37:21.551913   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt.c7fa3a9e: {Name:mk698d28ea167d8dd079ab5c0e3a196882512ea5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.552214   53894 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key.c7fa3a9e ...
	I0717 13:37:21.552222   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key.c7fa3a9e: {Name:mk8f93d0efc5e23ced6a8dbe10a5080173627e8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.552410   53894 certs.go:337] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt
	I0717 13:37:21.552573   53894 certs.go:341] copying /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key
	I0717 13:37:21.552733   53894 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key
	I0717 13:37:21.552746   53894 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.crt with IP's: []
	I0717 13:37:21.669087   53894 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.crt ...
	I0717 13:37:21.669100   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.crt: {Name:mk5767ef62185dd14948b457a6b5c56b8a1b97aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.669408   53894 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key ...
	I0717 13:37:21.669416   53894 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key: {Name:mkbdb4b6e12da121e0c2d626d9fdb3a160d44d25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:37:21.669844   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 13:37:21.669893   53894 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 13:37:21.669906   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 13:37:21.669944   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 13:37:21.669974   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 13:37:21.670004   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 13:37:21.670074   53894 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:37:21.670590   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 13:37:21.694298   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 13:37:21.717389   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 13:37:21.741683   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 13:37:21.764672   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 13:37:21.788650   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 13:37:21.812014   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 13:37:21.836102   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 13:37:21.859535   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 13:37:21.885256   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 13:37:21.909483   53894 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 13:37:21.931765   53894 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 13:37:21.947973   53894 ssh_runner.go:195] Run: openssl version
	I0717 13:37:21.953903   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 13:37:21.963849   53894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 13:37:21.968793   53894 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 13:37:21.968853   53894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 13:37:21.976271   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 13:37:21.986707   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 13:37:21.996705   53894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:37:22.001426   53894 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:37:22.001472   53894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:37:22.008679   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 13:37:22.019498   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 13:37:22.029942   53894 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 13:37:22.034976   53894 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 13:37:22.035043   53894 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 13:37:22.043196   53894 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
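[Editor's note] Each of the three test -L / ln -fs steps above exposes a CA certificate under /etc/ssl/certs via a symlink named after its OpenSSL subject hash, which is how OpenSSL locates trust anchors. A sketch of the pattern, using the minikubeCA file from this log (the hash b5213941 matches the link created above):

  H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"   # e.g. /etc/ssl/certs/b5213941.0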
	I0717 13:37:22.053406   53894 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 13:37:22.058047   53894 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 13:37:22.058091   53894 kubeadm.go:404] StartCluster: {Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:37:22.058180   53894 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:37:22.078164   53894 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 13:37:22.087819   53894 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:37:22.097237   53894 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:37:22.097299   53894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:37:22.107680   53894 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:37:22.107713   53894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
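[Editor's note] The --ignore-preflight-errors list above waives the checks that are expected to fire inside a Docker-driver node (ports, swap, SystemVerification, bridge-nf-call-iptables). The preflight checks can also be exercised on their own with the same config; a sketch, assuming this v1.16 binary's phase runner (kubeadm init phases were introduced in v1.13):

  sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml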
	I0717 13:37:22.163055   53894 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:37:22.163118   53894 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:37:22.427585   53894 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:37:22.427683   53894 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:37:22.427777   53894 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:37:22.620190   53894 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:37:22.620978   53894 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:37:22.628338   53894 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:37:22.695569   53894 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:37:22.753462   53894 out.go:204]   - Generating certificates and keys ...
	I0717 13:37:22.753581   53894 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:37:22.753710   53894 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:37:23.004716   53894 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 13:37:23.051824   53894 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 13:37:23.309527   53894 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 13:37:23.497744   53894 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 13:37:23.616543   53894 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 13:37:23.616666   53894 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0717 13:37:23.722323   53894 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 13:37:23.722427   53894 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0717 13:37:23.800425   53894 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 13:37:24.225050   53894 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 13:37:24.460381   53894 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 13:37:24.460462   53894 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:37:24.504182   53894 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:37:24.557199   53894 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:37:24.668349   53894 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:37:24.809826   53894 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:37:24.811070   53894 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:37:24.832499   53894 out.go:204]   - Booting up control plane ...
	I0717 13:37:24.832635   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:37:24.832768   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:37:24.832845   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:37:24.832949   53894 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:37:24.833141   53894 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:38:04.821613   53894 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:38:04.822469   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:38:04.822718   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:38:09.823407   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:38:09.823615   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:38:19.825201   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:38:19.825468   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:38:39.826023   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:38:39.826236   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:39:19.826861   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:39:19.827034   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:39:19.827048   53894 kubeadm.go:322] 
	I0717 13:39:19.827110   53894 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:39:19.827176   53894 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:39:19.827187   53894 kubeadm.go:322] 
	I0717 13:39:19.827226   53894 kubeadm.go:322] This error is likely caused by:
	I0717 13:39:19.827256   53894 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:39:19.827337   53894 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:39:19.827346   53894 kubeadm.go:322] 
	I0717 13:39:19.827423   53894 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:39:19.827456   53894 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:39:19.827490   53894 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:39:19.827496   53894 kubeadm.go:322] 
	I0717 13:39:19.827605   53894 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:39:19.827691   53894 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:39:19.827756   53894 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:39:19.827790   53894 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:39:19.827846   53894 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:39:19.827877   53894 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:39:19.830169   53894 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:39:19.830279   53894 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:39:19.830392   53894 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:39:19.830478   53894 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:39:19.830554   53894 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:39:19.830620   53894 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
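[Editor's note] The five identical kubelet-check failures above are kubeadm polling the kubelet's local healthz endpoint with increasing back-off. On the node, the same probe and kubeadm's suggested follow-ups can be run directly (commands taken from the log itself):

  curl -sSL http://localhost:10248/healthz    # the exact probe kubeadm uses
  systemctl status kubelet
  journalctl -xeu kubelet
  docker ps -a | grep kube | grep -v pause    # look for a crashed control-plane container
  docker logs CONTAINERID                     # then inspect its logs

Note that the KubeletConfiguration generated earlier pins cgroupDriver: cgroupfs to match Docker's reported driver, so the IsDockerSystemdCheck warning above is expected here and is not itself the failure cause.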
	W0717 13:39:19.830703   53894 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-378000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 13:39:19.830736   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:39:20.251446   53894 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:39:20.263905   53894 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:39:20.263956   53894 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:39:20.273111   53894 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:39:20.273140   53894 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:39:20.324040   53894 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:39:20.324089   53894 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:39:20.581703   53894 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:39:20.581815   53894 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:39:20.581897   53894 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:39:20.768854   53894 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:39:20.769867   53894 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:39:20.776554   53894 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:39:20.841580   53894 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:39:20.863177   53894 out.go:204]   - Generating certificates and keys ...
	I0717 13:39:20.863314   53894 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:39:20.863517   53894 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:39:20.863666   53894 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:39:20.863755   53894 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:39:20.863864   53894 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:39:20.863920   53894 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:39:20.863995   53894 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:39:20.864102   53894 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:39:20.864179   53894 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:39:20.864308   53894 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:39:20.864359   53894 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:39:20.864445   53894 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:39:21.204051   53894 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:39:21.620596   53894 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:39:21.816709   53894 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:39:22.030990   53894 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:39:22.031558   53894 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:39:22.052118   53894 out.go:204]   - Booting up control plane ...
	I0717 13:39:22.052231   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:39:22.052341   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:39:22.052431   53894 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:39:22.052522   53894 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:39:22.052704   53894 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:40:02.048704   53894 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:40:02.049692   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:40:02.049886   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:40:07.052773   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:40:07.053006   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:40:17.056560   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:40:17.056779   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:40:37.058830   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:40:37.059043   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:41:17.061780   53894 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:41:17.062000   53894 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:41:17.062019   53894 kubeadm.go:322] 
	I0717 13:41:17.062071   53894 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:41:17.062128   53894 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:41:17.062142   53894 kubeadm.go:322] 
	I0717 13:41:17.062190   53894 kubeadm.go:322] This error is likely caused by:
	I0717 13:41:17.062229   53894 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:41:17.062368   53894 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:41:17.062378   53894 kubeadm.go:322] 
	I0717 13:41:17.062525   53894 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:41:17.062564   53894 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:41:17.062599   53894 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:41:17.062605   53894 kubeadm.go:322] 
	I0717 13:41:17.062735   53894 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:41:17.062882   53894 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:41:17.062970   53894 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:41:17.063014   53894 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:41:17.063080   53894 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:41:17.063108   53894 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:41:17.064739   53894 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:41:17.064809   53894 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:41:17.064934   53894 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:41:17.065052   53894 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:41:17.065129   53894 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:41:17.065190   53894 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 13:41:17.065225   53894 kubeadm.go:406] StartCluster complete in 3m54.991440132s
	I0717 13:41:17.065323   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:41:17.085598   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.085611   53894 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:41:17.085684   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:41:17.104591   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.104605   53894 logs.go:286] No container was found matching "etcd"
	I0717 13:41:17.104676   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:41:17.123359   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.123373   53894 logs.go:286] No container was found matching "coredns"
	I0717 13:41:17.123447   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:41:17.143673   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.143685   53894 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:41:17.143767   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:41:17.164317   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.164331   53894 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:41:17.164406   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:41:17.183348   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.183361   53894 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:41:17.183436   53894 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:41:17.202224   53894 logs.go:284] 0 containers: []
	W0717 13:41:17.202239   53894 logs.go:286] No container was found matching "kindnet"
	I0717 13:41:17.202246   53894 logs.go:123] Gathering logs for kubelet ...
	I0717 13:41:17.202254   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:41:17.240713   53894 logs.go:123] Gathering logs for dmesg ...
	I0717 13:41:17.240731   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:41:17.255113   53894 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:41:17.255128   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:41:17.310359   53894 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:41:17.310373   53894 logs.go:123] Gathering logs for Docker ...
	I0717 13:41:17.310379   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:41:17.326951   53894 logs.go:123] Gathering logs for container status ...
	I0717 13:41:17.326967   53894 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 13:41:17.379061   53894 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 13:41:17.379079   53894 out.go:239] * 
	W0717 13:41:17.379153   53894 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:41:17.379182   53894 out.go:239] * 
	W0717 13:41:17.379867   53894 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:41:17.457919   53894 out.go:177] 
	W0717 13:41:17.499942   53894 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equivalent to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:41:17.500005   53894 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 13:41:17.500027   53894 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 13:41:17.541918   53894 out.go:177] 

** /stderr **
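The log bundle requested in the boxed suggestion above can be generated for this profile with the same binary this run uses (a sketch; the --file flag is quoted from the box, and -p selects the failing profile):

	out/minikube-darwin-amd64 logs -p old-k8s-version-378000 --file=logs.txt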
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
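Two follow-ups suggested by the log above, sketched here with the profile name, binary path, and Kubernetes version taken from this run (the --extra-config value is the one minikube itself suggests):

	# Inspect kubelet state inside the node container, per the kubeadm hints above.
	docker exec old-k8s-version-378000 systemctl status kubelet
	docker exec old-k8s-version-378000 journalctl -xeu kubelet | tail -n 50

	# Retry the first start with the suggested cgroup-driver override.
	out/minikube-darwin-amd64 start -p old-k8s-version-378000 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd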
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:37:05.794684114Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c95dcad38d994f0eefc35e3377946f33ace9c263664067236394916b531fb3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59146"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59150"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c95dcad38d99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c253e59524ffcac002d9239041f417095609bffd96e4ac16dab03e933a4af6a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
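Individual fields can be extracted from a dump like this with docker inspect's Go-template flag; for example, the host port mapped to the apiserver's 8443/tcp (59150 in the dump above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-378000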
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 6 (355.918478ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:41:18.042187   54988 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-378000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-378000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
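The exit-6 status check above fails on kubeconfig extraction rather than on the container itself: the first start aborted before "old-k8s-version-378000" was written to the kubeconfig. The warning's own hint applies; a plausible manual follow-up (whether it succeeds depends on the cluster actually becoming reachable):

	out/minikube-darwin-amd64 update-context -p old-k8s-version-378000
	kubectl config get-contexts   # check whether old-k8s-version-378000 now appears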
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (257.19s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-378000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-378000 create -f testdata/busybox.yaml: exit status 1 (34.791206ms)

** stderr ** 
	error: no openapi getter

                                                
start_stop_delete_test.go:196: kubectl --context old-k8s-version-378000 create -f testdata/busybox.yaml failed: exit status 1
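kubectl's "error: no openapi getter" is consistent with the kubeconfig problem reported in the status check below: the old-k8s-version-378000 context was never written, so create has no API server from which to fetch a validation schema. A quick check before retrying (a sketch, using the same context name as the test):

	kubectl config get-contexts old-k8s-version-378000
	kubectl --context old-k8s-version-378000 get nodes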
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:37:05.794684114Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c95dcad38d994f0eefc35e3377946f33ace9c263664067236394916b531fb3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59146"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59150"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c95dcad38d99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c253e59524ffcac002d9239041f417095609bffd96e4ac16dab03e933a4af6a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
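
Note: the resolved port map in the inspect dump above (22, 2376, 32443, 5000 and 8443/tcp bound to 127.0.0.1:59146-59150) can be read back per port without dumping the whole JSON. A minimal sketch in Go, assuming only the container name from this run and a local Docker CLI:

	// Sketch: read the host port Docker picked for container port 22/tcp.
	// `docker port` is standard Docker CLI; the container name is from this run.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "port", "old-k8s-version-378000", "22/tcp").Output()
		if err != nil {
			fmt.Println("docker port failed:", err)
			return
		}
		fmt.Print(string(out)) // e.g. "127.0.0.1:59146" per the dump above
	}
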
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 6 (356.322078ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:41:18.488721   55001 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-378000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-378000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
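
Note: the exit-status-6 pattern above recurs throughout this group: the container is Running, but the profile's entry is missing from the kubeconfig, so status.go cannot extract an endpoint. An illustrative sketch (not the harness's code) of the same check using client-go, assuming k8s.io/client-go is available:

	// Sketch: load the kubeconfig and look for the profile's context, mirroring
	// the status.go:415 failure above; `minikube update-context` rewrites the entry.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/16890-37879/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["old-k8s-version-378000"]; !ok {
			fmt.Println(`"old-k8s-version-378000" does not appear in kubeconfig`)
		}
	}
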
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:37:05.794684114Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c95dcad38d994f0eefc35e3377946f33ace9c263664067236394916b531fb3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59146"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59150"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c95dcad38d99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c253e59524ffcac002d9239041f417095609bffd96e4ac16dab03e933a4af6a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
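
Note: each post-mortem step re-runs a full docker inspect; a single field can be pulled with a Go template instead. A sketch assuming the container and network names from this run:

	// Sketch: extract just the container IP (192.168.67.2 above) via --format;
	// the template indexes NetworkSettings.Networks by the minikube network name.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		tmpl := `{{(index .NetworkSettings.Networks "old-k8s-version-378000").IPAddress}}`
		out, err := exec.Command("docker", "inspect", "--format", tmpl, "old-k8s-version-378000").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // "192.168.67.2"
	}
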
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 6 (359.765069ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:41:18.899856   55013 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-378000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-378000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-378000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 13:41:25.569441   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.574555   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.586273   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.606984   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.647134   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.727556   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:25.888008   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:26.208602   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:26.850904   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:28.131177   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:30.691390   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:34.704477   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:41:35.811826   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:42.251058   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:41:44.657285   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:41:46.052852   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:41:49.671828   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:41:56.293127   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.299512   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.309662   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.330071   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.370637   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.450766   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.611689   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:56.933707   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:57.576016   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:41:58.856143   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:42:01.417417   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:42:06.533276   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:42:06.538110   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:42:16.639501   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:42:16.778550   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:42:25.357022   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:42:33.593942   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:42:36.455132   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 13:42:37.258736   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:42:47.495493   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:42:53.043954   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:42:56.624638   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:43:11.591996   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-378000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.753325083s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-378000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-378000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-378000 describe deploy/metrics-server -n kube-system: exit status 1 (34.834156ms)

** stderr ** 
	error: context "old-k8s-version-378000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-378000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
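
Note: the addon itself is not the root cause here; every kubectl callback in the stderr above was refused on 127.0.0.1:8443, i.e. the apiserver inside the node never came up (consistent with the stale-kubeconfig status checks). A reachability sketch, assuming it runs where those dials happened (inside the node, e.g. via minikube ssh):

	// Sketch: probe the apiserver endpoint the kubectl callbacks were refused on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // matches "connection refused" above
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
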
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 718537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:37:05.794684114Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c95dcad38d994f0eefc35e3377946f33ace9c263664067236394916b531fb3c1",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59146"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59147"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59148"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59150"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c95dcad38d99",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c253e59524ffcac002d9239041f417095609bffd96e4ac16dab03e933a4af6a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 6 (375.498358ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 13:43:15.115614   55060 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-378000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-378000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.22s)

TestStartStop/group/old-k8s-version/serial/SecondStart (507.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0717 13:43:18.220109   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:43:58.404114   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:44:00.807696   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:44:09.416314   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:44:18.617775   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:44:26.090889   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:44:28.497477   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:44:32.027192   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:44:40.140091   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m24.656545897s)

-- stdout --
	* [old-k8s-version-378000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-378000 in cluster old-k8s-version-378000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-378000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0717 13:43:17.101356   55100 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:43:17.101504   55100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:43:17.101510   55100 out.go:309] Setting ErrFile to fd 2...
	I0717 13:43:17.101514   55100 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:43:17.101695   55100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:43:17.103152   55100 out.go:303] Setting JSON to false
	I0717 13:43:17.122348   55100 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":16968,"bootTime":1689609629,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 13:43:17.122426   55100 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 13:43:17.144170   55100 out.go:177] * [old-k8s-version-378000] minikube v1.30.1 on Darwin 13.4.1
	I0717 13:43:17.186210   55100 notify.go:220] Checking for updates...
	I0717 13:43:17.207261   55100 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 13:43:17.249137   55100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:43:17.270292   55100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 13:43:17.291065   55100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 13:43:17.312394   55100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 13:43:17.333019   55100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 13:43:17.354812   55100 config.go:182] Loaded profile config "old-k8s-version-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 13:43:17.376205   55100 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 13:43:17.397184   55100 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 13:43:17.453177   55100 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 13:43:17.453327   55100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:43:17.549829   55100 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:43:17.538623873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:43:17.591703   55100 out.go:177] * Using the docker driver based on existing profile
	I0717 13:43:17.612838   55100 start.go:298] selected driver: docker
	I0717 13:43:17.612864   55100 start.go:880] validating driver "docker" against &{Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:43:17.613006   55100 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 13:43:17.616940   55100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:43:17.714801   55100 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:43:17.704297031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:43:17.715019   55100 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 13:43:17.715039   55100 cni.go:84] Creating CNI manager for ""
	I0717 13:43:17.715051   55100 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:43:17.715065   55100 start_flags.go:319] config:
	{Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:43:17.757895   55100 out.go:177] * Starting control plane node old-k8s-version-378000 in cluster old-k8s-version-378000
	I0717 13:43:17.779591   55100 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 13:43:17.800772   55100 out.go:177] * Pulling base image ...
	I0717 13:43:17.842614   55100 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:43:17.842612   55100 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 13:43:17.842744   55100 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 13:43:17.842766   55100 cache.go:57] Caching tarball of preloaded images
	I0717 13:43:17.843518   55100 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 13:43:17.843693   55100 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 13:43:17.844130   55100 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/config.json ...
	I0717 13:43:17.893322   55100 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 13:43:17.893347   55100 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 13:43:17.893365   55100 cache.go:195] Successfully downloaded all kic artifacts
	I0717 13:43:17.893407   55100 start.go:365] acquiring machines lock for old-k8s-version-378000: {Name:mk1fa5bdcb933442ff3b09d713656e27b57c768b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 13:43:17.893500   55100 start.go:369] acquired machines lock for "old-k8s-version-378000" in 74.101µs
	I0717 13:43:17.893526   55100 start.go:96] Skipping create...Using existing machine configuration
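The machines lock above is taken with a 500ms retry delay and a 10m timeout. A minimal Go sketch of that acquire-with-retry shape, assuming a plain O_EXCL lockfile at a hypothetical path rather than minikube's actual lock package:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries creating lockPath exclusively every delay until
// timeout elapses -- the Delay/Timeout shape shown in the log above.
func acquire(lockPath string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL, 0o600)
		if err == nil {
			f.Close() // lock held; delete lockPath to release it
			return nil
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lockPath)
		}
		time.Sleep(delay)
	}
}

func main() {
	fmt.Println(acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute))
}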
	I0717 13:43:17.893533   55100 fix.go:54] fixHost starting: 
	I0717 13:43:17.893751   55100 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Status}}
	I0717 13:43:17.943501   55100 fix.go:102] recreateIfNeeded on old-k8s-version-378000: state=Stopped err=<nil>
	W0717 13:43:17.943548   55100 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 13:43:17.965253   55100 out.go:177] * Restarting existing docker container for "old-k8s-version-378000" ...
	I0717 13:43:18.006887   55100 cli_runner.go:164] Run: docker start old-k8s-version-378000
	I0717 13:43:18.250267   55100 cli_runner.go:164] Run: docker container inspect old-k8s-version-378000 --format={{.State.Status}}
	I0717 13:43:18.300497   55100 kic.go:426] container "old-k8s-version-378000" state is running.
	I0717 13:43:18.301081   55100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:43:18.354056   55100 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/config.json ...
	I0717 13:43:18.354486   55100 machine.go:88] provisioning docker machine ...
	I0717 13:43:18.354515   55100 ubuntu.go:169] provisioning hostname "old-k8s-version-378000"
	I0717 13:43:18.354611   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:18.408824   55100 main.go:141] libmachine: Using SSH client type: native
	I0717 13:43:18.409233   55100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59374 <nil> <nil>}
	I0717 13:43:18.409250   55100 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-378000 && echo "old-k8s-version-378000" | sudo tee /etc/hostname
	I0717 13:43:18.410219   55100 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 13:43:21.551100   55100 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-378000
	
	I0717 13:43:21.551207   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:21.601524   55100 main.go:141] libmachine: Using SSH client type: native
	I0717 13:43:21.601863   55100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59374 <nil> <nil>}
	I0717 13:43:21.601877   55100 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-378000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-378000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-378000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 13:43:21.730488   55100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
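The SSH command above keeps /etc/hosts consistent with the machine hostname: do nothing if a line already ends with the hostname, rewrite an existing 127.0.1.1 entry, otherwise append one. The same decision logic in Go, as a rough sketch (patchHosts is a hypothetical helper operating on file contents, not minikube code):

package main

import (
	"fmt"
	"strings"
)

// patchHosts mirrors the grep/sed logic in the SSH command above:
// leave the file alone if some line already ends with the hostname,
// rewrite an existing 127.0.1.1 entry, otherwise append one.
func patchHosts(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(l, " "+name) || strings.HasSuffix(l, "\t"+name) {
			return hosts // hostname already mapped
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost", "old-k8s-version-378000"))
}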
	I0717 13:43:21.730517   55100 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 13:43:21.730546   55100 ubuntu.go:177] setting up certificates
	I0717 13:43:21.730554   55100 provision.go:83] configureAuth start
	I0717 13:43:21.730637   55100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:43:21.779432   55100 provision.go:138] copyHostCerts
	I0717 13:43:21.779541   55100 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 13:43:21.779550   55100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 13:43:21.779648   55100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 13:43:21.779884   55100 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 13:43:21.779890   55100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 13:43:21.779957   55100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 13:43:21.780120   55100 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 13:43:21.780126   55100 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 13:43:21.780190   55100 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 13:43:21.780321   55100 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-378000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-378000]
	I0717 13:43:21.966948   55100 provision.go:172] copyRemoteCerts
	I0717 13:43:21.967017   55100 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 13:43:21.967086   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.048506   55100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59374 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:43:22.141357   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 13:43:22.163714   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 13:43:22.185324   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 13:43:22.206696   55100 provision.go:86] duration metric: configureAuth took 476.12462ms
	I0717 13:43:22.206711   55100 ubuntu.go:193] setting minikube options for container-runtime
	I0717 13:43:22.206879   55100 config.go:182] Loaded profile config "old-k8s-version-378000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 13:43:22.206940   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.256767   55100 main.go:141] libmachine: Using SSH client type: native
	I0717 13:43:22.257124   55100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59374 <nil> <nil>}
	I0717 13:43:22.257135   55100 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 13:43:22.384412   55100 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 13:43:22.384426   55100 ubuntu.go:71] root file system type: overlay
	I0717 13:43:22.384528   55100 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 13:43:22.384628   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.435279   55100 main.go:141] libmachine: Using SSH client type: native
	I0717 13:43:22.435628   55100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59374 <nil> <nil>}
	I0717 13:43:22.435681   55100 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 13:43:22.573078   55100 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 13:43:22.573190   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.623774   55100 main.go:141] libmachine: Using SSH client type: native
	I0717 13:43:22.624145   55100 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59374 <nil> <nil>}
	I0717 13:43:22.624160   55100 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 13:43:22.759413   55100 main.go:141] libmachine: SSH cmd err, output: <nil>: 
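Note the shape of the command above: the rendered unit goes to docker.service.new, and only when diff reports a difference is it moved into place and Docker reloaded and restarted, so an unchanged config never bounces the daemon. A small Go sketch of that update-only-if-changed step (updateIfChanged is a hypothetical helper; the caller would trigger the restart only when it returns true):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged mirrors the diff-then-mv step above: compare the
// freshly rendered unit with the installed one and replace it only
// when the contents differ, reporting whether a restart is needed.
func updateIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // identical: no restart required
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	fmt.Println(changed, err)
}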
	I0717 13:43:22.759428   55100 machine.go:91] provisioned docker machine in 4.404942091s
	I0717 13:43:22.759438   55100 start.go:300] post-start starting for "old-k8s-version-378000" (driver="docker")
	I0717 13:43:22.759449   55100 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 13:43:22.759515   55100 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 13:43:22.759567   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.809522   55100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59374 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:43:22.903619   55100 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 13:43:22.907724   55100 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 13:43:22.907747   55100 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 13:43:22.907754   55100 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 13:43:22.907758   55100 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 13:43:22.907766   55100 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 13:43:22.907849   55100 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 13:43:22.908024   55100 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 13:43:22.908225   55100 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 13:43:22.916934   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:43:22.937975   55100 start.go:303] post-start completed in 178.527792ms
	I0717 13:43:22.938067   55100 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:43:22.938127   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:22.987612   55100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59374 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:43:23.077653   55100 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 13:43:23.082734   55100 fix.go:56] fixHost completed within 5.189207771s
	I0717 13:43:23.082747   55100 start.go:83] releasing machines lock for "old-k8s-version-378000", held for 5.189250506s
	I0717 13:43:23.082831   55100 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-378000
	I0717 13:43:23.132660   55100 ssh_runner.go:195] Run: cat /version.json
	I0717 13:43:23.132694   55100 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 13:43:23.132744   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:23.132780   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:23.184536   55100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59374 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	I0717 13:43:23.184588   55100 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59374 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/old-k8s-version-378000/id_rsa Username:docker}
	W0717 13:43:23.372025   55100 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 13:43:23.372155   55100 ssh_runner.go:195] Run: systemctl --version
	I0717 13:43:23.377459   55100 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 13:43:23.382590   55100 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 13:43:23.382651   55100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 13:43:23.391464   55100 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 13:43:23.400456   55100 cni.go:311] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0717 13:43:23.400481   55100 start.go:469] detecting cgroup driver to use...
	I0717 13:43:23.400494   55100 detect.go:196] detected "cgroupfs" cgroup driver on host os
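The cgroup driver detected here is later confirmed by asking Docker directly with `docker info --format {{.CgroupDriver}}` (see below). That probe in Go, assuming a docker CLI on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// cgroupDriver runs the same probe that appears later in this log:
// docker info --format {{.CgroupDriver}}
func cgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	fmt.Println(cgroupDriver())
}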
	I0717 13:43:23.400599   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:43:23.416354   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 13:43:23.426440   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 13:43:23.436710   55100 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 13:43:23.436769   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 13:43:23.446462   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:43:23.456373   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 13:43:23.466288   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:43:23.476064   55100 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 13:43:23.485425   55100 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 13:43:23.495851   55100 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 13:43:23.504496   55100 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 13:43:23.512955   55100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:43:23.582777   55100 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 13:43:23.657391   55100 start.go:469] detecting cgroup driver to use...
	I0717 13:43:23.657409   55100 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:43:23.657472   55100 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 13:43:23.669709   55100 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 13:43:23.669776   55100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 13:43:23.681703   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:43:23.698086   55100 ssh_runner.go:195] Run: which cri-dockerd
	I0717 13:43:23.708317   55100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 13:43:23.723031   55100 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 13:43:23.740350   55100 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 13:43:23.832620   55100 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 13:43:23.930798   55100 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 13:43:23.930825   55100 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 13:43:23.947918   55100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:43:24.018499   55100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:43:24.257310   55100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:43:24.282720   55100 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:43:24.352276   55100 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 13:43:24.352476   55100 cli_runner.go:164] Run: docker exec -t old-k8s-version-378000 dig +short host.docker.internal
	I0717 13:43:24.463650   55100 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 13:43:24.463766   55100 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 13:43:24.468639   55100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 13:43:24.479568   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-378000
	I0717 13:43:24.529055   55100 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 13:43:24.529139   55100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:43:24.549772   55100 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:43:24.549798   55100 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:43:24.549873   55100 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:43:24.559029   55100 ssh_runner.go:195] Run: which lz4
	I0717 13:43:24.563297   55100 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 13:43:24.567425   55100 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 13:43:24.567454   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 13:43:29.445500   55100 docker.go:600] Took 4.882297 seconds to copy over tarball
	I0717 13:43:29.445570   55100 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 13:43:31.417276   55100 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.971687302s)
	I0717 13:43:31.417294   55100 ssh_runner.go:146] rm: /preloaded.tar.lz4
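The preload step above is: scp the ~370MB lz4 tarball to /preloaded.tar.lz4, unpack it over /var with tar's -I lz4 decompressor, then delete the tarball to free space. The same sequence as a Go sketch (paths are the ones from the log; a real run needs root and the lz4 binary on the target):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// loadPreload mirrors the logged sequence: extract the lz4-compressed
// image tarball over /var, then remove the tarball.
func loadPreload(tarball string) error {
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	fmt.Println(loadPreload("/preloaded.tar.lz4"))
}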
	I0717 13:43:31.467425   55100 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 13:43:31.476517   55100 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 13:43:31.493140   55100 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:43:31.571681   55100 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:43:32.190626   55100 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:43:32.212872   55100 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 13:43:32.212886   55100 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 13:43:32.212894   55100 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 13:43:32.220534   55100 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:43:32.220590   55100 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 13:43:32.220615   55100 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 13:43:32.220531   55100 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:43:32.220687   55100 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:43:32.220775   55100 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:43:32.220878   55100 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:43:32.221592   55100 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:43:32.225617   55100 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 13:43:32.226074   55100 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:43:32.226862   55100 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:43:32.227001   55100 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:43:32.227034   55100 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 13:43:32.227101   55100 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:43:32.228133   55100 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:43:32.228165   55100 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:43:33.358263   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:43:33.377728   55100 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 13:43:33.377775   55100 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:43:33.377839   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 13:43:33.398871   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 13:43:33.519351   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 13:43:33.538980   55100 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 13:43:33.539006   55100 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 13:43:33.539061   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 13:43:33.558723   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 13:43:33.714533   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:43:33.735823   55100 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 13:43:33.735867   55100 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:43:33.735926   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 13:43:33.755872   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:43:33.758197   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 13:43:33.776497   55100 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 13:43:33.776529   55100 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:43:33.776595   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 13:43:33.796227   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 13:43:33.964961   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 13:43:33.984724   55100 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 13:43:33.984747   55100 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 13:43:33.984807   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 13:43:34.003998   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 13:43:34.268806   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:43:34.288365   55100 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 13:43:34.288396   55100 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:43:34.288468   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 13:43:34.309841   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 13:43:34.550313   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 13:43:34.571217   55100 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 13:43:34.571257   55100 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 13:43:34.571327   55100 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 13:43:34.590317   55100 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 13:43:35.277125   55100 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 13:43:35.298578   55100 cache_images.go:92] LoadImages completed in 3.085678707s
	W0717 13:43:35.298635   55100 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
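Each "needs transfer" line above comes from comparing the ID reported by `docker image inspect --format {{.Id}}` against the content hash the release pins; on a mismatch the image is removed with `docker rmi` and reloaded from the on-disk cache, which fails here because the cached image files were never downloaded. The comparison step as a sketch (needsTransfer is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer mirrors the check logged above: inspect the image's
// ID in the runtime and compare it (minus any "sha256:" prefix) with
// the pinned hash; a mismatch or missing image means it must be
// reloaded from the local cache.
func needsTransfer(image, wantHash string) bool {
	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	got := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
	return got != wantHash
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.1",
		"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e"))
}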
	I0717 13:43:35.298717   55100 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 13:43:35.351735   55100 cni.go:84] Creating CNI manager for ""
	I0717 13:43:35.351752   55100 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 13:43:35.351771   55100 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 13:43:35.351793   55100 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-378000 NodeName:old-k8s-version-378000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 13:43:35.351914   55100 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-378000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-378000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 13:43:35.351991   55100 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-378000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
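Both blobs above (the kubeadm config and the kubelet unit) are rendered from the option structs shown and written out as *.new files, applied only if they differ from what is already on the node (see the kubeadm.yaml diff further below). A toy rendering of just the InitConfiguration stanza with text/template; the template text here is illustrative, not minikube's actual template:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// A cut-down template covering only the InitConfiguration stanza,
// filled from the same fields seen in the kubeadm options dump above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	err := t.Execute(os.Stdout, map[string]interface{}{
		"AdvertiseAddress": "192.168.67.2",
		"BindPort":         8443,
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}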
	I0717 13:43:35.352074   55100 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 13:43:35.361362   55100 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 13:43:35.361427   55100 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 13:43:35.370024   55100 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0717 13:43:35.386238   55100 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 13:43:35.402737   55100 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0717 13:43:35.420523   55100 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 13:43:35.425200   55100 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 13:43:35.436647   55100 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000 for IP: 192.168.67.2
	I0717 13:43:35.436665   55100 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:43:35.436837   55100 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 13:43:35.436902   55100 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 13:43:35.437006   55100 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/client.key
	I0717 13:43:35.437082   55100 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key.c7fa3a9e
	I0717 13:43:35.437147   55100 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key
	I0717 13:43:35.437371   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 13:43:35.437417   55100 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 13:43:35.437432   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 13:43:35.437467   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 13:43:35.437502   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 13:43:35.437533   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 13:43:35.437606   55100 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:43:35.438126   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 13:43:35.460316   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 13:43:35.481740   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 13:43:35.503615   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/old-k8s-version-378000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 13:43:35.525719   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 13:43:35.547507   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 13:43:35.569164   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 13:43:35.590819   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 13:43:35.612118   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 13:43:35.633806   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 13:43:35.656284   55100 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 13:43:35.677843   55100 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 13:43:35.694502   55100 ssh_runner.go:195] Run: openssl version
	I0717 13:43:35.701747   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 13:43:35.713182   55100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 13:43:35.717934   55100 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 13:43:35.717983   55100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 13:43:35.725325   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
	I0717 13:43:35.734499   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 13:43:35.744359   55100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 13:43:35.748756   55100 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 13:43:35.748799   55100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 13:43:35.755618   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 13:43:35.764679   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 13:43:35.774304   55100 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:43:35.778624   55100 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:43:35.778668   55100 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:43:35.785416   55100 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
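The /etc/ssl/certs/<hash>.0 names above follow OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the hash (b5213941 for minikubeCA.pem here), and a <hash>.0 symlink lets OpenSSL find the CA by directory lookup. Both steps as a sketch (linkCA is a hypothetical helper and needs root against a real /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCA mirrors the openssl/ln steps above: compute the cert's
// subject hash and symlink /etc/ssl/certs/<hash>.0 to the cert so
// OpenSSL's hashed-directory lookup can find it.
func linkCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem"))
}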
	I0717 13:43:35.794521   55100 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 13:43:35.799323   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 13:43:35.805939   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 13:43:35.812852   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 13:43:35.819508   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 13:43:35.826308   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 13:43:35.832980   55100 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
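The six openssl runs above each ask whether a certificate stays valid for another 86400 seconds (24 hours); -checkend exits 0 when it does. As a sketch:

package main

import (
	"fmt"
	"os/exec"
)

// validFor24h mirrors the probes above: `openssl x509 -checkend 86400`
// exits 0 iff the certificate does not expire within 86400 seconds.
func validFor24h(certPath string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	for _, c := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		fmt.Println(c, validFor24h(c))
	}
}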
	I0717 13:43:35.839754   55100 kubeadm.go:404] StartCluster: {Name:old-k8s-version-378000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-378000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:43:35.839867   55100 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:43:35.859012   55100 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 13:43:35.868392   55100 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 13:43:35.868404   55100 kubeadm.go:636] restartCluster start
	I0717 13:43:35.868461   55100 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 13:43:35.877275   55100 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:35.877354   55100 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-378000
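
The docker container inspect call above extracts the host port that container port 8443/tcp is published on, via a Go template over .NetworkSettings.Ports. A standalone sketch of the same lookup (container name taken from the log; the helper is illustrative):

    // hostport.go - sketch of the port lookup above: ask Docker for the host
    // port mapped to container port 8443/tcp, using the same --format
    // template the log shows.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func apiServerHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := apiServerHostPort("old-k8s-version-378000")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(port)
    }
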
	I0717 13:43:35.928230   55100 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-378000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:43:35.928398   55100 kubeconfig.go:146] "old-k8s-version-378000" context is missing from /Users/jenkins/minikube-integration/16890-37879/kubeconfig - will repair!
	I0717 13:43:35.928740   55100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
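
The lock.go line shows the kubeconfig repair is serialized through a write lock with a 500ms retry delay and a 1m0s timeout. A minimal sketch of that acquire-with-retry pattern, using a plain O_EXCL lock file as a stand-in for minikube's actual lock implementation:

    // kubelock.go - sketch of the guarded write hinted at by the lock.go
    // line above: retry acquiring a lock file (500 ms delay, 1 min timeout)
    // before rewriting the kubeconfig, so concurrent processes don't clobber
    // each other. The lock-file mechanism here is an assumption, not the
    // real implementation.
    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    func withFileLock(path string, delay, timeout time.Duration, fn func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		// O_EXCL makes creation atomic: only one process wins the lock.
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
    		if err == nil {
    			f.Close()
    			defer os.Remove(path) // release on return
    			return fn()
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for lock " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	err := withFileLock("/tmp/kubeconfig.lock", 500*time.Millisecond, time.Minute, func() error {
    		fmt.Println("write repaired kubeconfig here")
    		return nil
    	})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
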
	I0717 13:43:35.930279   55100 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 13:43:35.939947   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:35.940004   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:35.950344   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:36.451878   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:36.452013   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:36.464286   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:36.951699   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:36.951843   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:36.963970   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:37.451431   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:37.451547   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:37.463749   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:37.951875   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:37.952048   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:37.964476   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:38.450678   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:38.450852   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:38.462734   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:38.950921   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:38.951071   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:38.963998   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:39.450977   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:39.451094   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:39.463131   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:39.950936   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:39.951082   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:39.963253   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:40.451318   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:40.451476   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:40.463755   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:40.950380   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:40.950431   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:40.962139   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:41.451188   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:41.451321   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:41.463632   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:41.951493   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:41.951650   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:41.962876   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:42.450435   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:42.450607   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:42.462773   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:42.950959   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:42.951101   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:42.963189   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:43.452442   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:43.452562   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:43.464618   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:43.951861   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:43.951988   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:43.964109   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:44.450483   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:44.450703   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:44.462952   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:44.951317   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:44.951466   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:44.963652   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:45.451627   55100 api_server.go:166] Checking apiserver status ...
	I0717 13:43:45.451792   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:43:45.463893   55100 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:43:45.941622   55100 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
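
The ten-second stretch above is a fixed-interval poll: roughly every 500ms, "pgrep -xnf kube-apiserver.*minikube.*" is retried until it reports a PID or the surrounding context hits its deadline, at which point the restart falls through to reconfiguration. A sketch of the pattern (timeout and interval taken from the log; the helper name is mine):

    // waitapi.go - sketch of the polling pattern above: retry pgrep every
    // 500 ms until it succeeds or the context deadline expires, which is
    // what surfaces as "apiserver error: context deadline exceeded".
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(500 * time.Millisecond)
    	defer ticker.Stop()
    	for {
    		// pgrep exits 0 only when a matching process exists.
    		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // deadline exceeded -> reconfigure path
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()
    	fmt.Println(waitForAPIServer(ctx))
    }
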
	I0717 13:43:45.941664   55100 kubeadm.go:1128] stopping kube-system containers ...
	I0717 13:43:45.941851   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:43:45.964620   55100 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 13:43:45.976774   55100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:43:45.985801   55100 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jul 17 20:39 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5727 Jul 17 20:39 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jul 17 20:39 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jul 17 20:39 /etc/kubernetes/scheduler.conf
	
	I0717 13:43:45.985869   55100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 13:43:45.994922   55100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 13:43:46.003978   55100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 13:43:46.012802   55100 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 13:43:46.021771   55100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:43:46.030708   55100 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 13:43:46.030719   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:43:46.085309   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:43:47.154462   55100 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.069135627s)
	I0717 13:43:47.154484   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:43:47.340758   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:43:47.398304   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
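
Reconfiguration replays the kubeadm init phases one at a time against the cached /var/tmp/minikube/kubeadm.yaml, with PATH pointed at the versioned binaries: certs, kubeconfig, kubelet-start, control-plane, then etcd. A sketch of that sequence (paths and version are the ones in the log; the loop itself is an illustrative stand-in, not minikube's kubeadm.go):

    // reconfigure.go - sketch of the phase sequence above: rerun the kubeadm
    // init phases in order against the cached config, stopping at the first
    // failure, with PATH pointing at the versioned binaries.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Env = append(os.Environ(),
    			"PATH=/var/lib/minikube/binaries/v1.16.0:"+os.Getenv("PATH"))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n%s", p, err, out)
    			os.Exit(1)
    		}
    	}
    }
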
	I0717 13:43:47.469493   55100 api_server.go:52] waiting for apiserver process to appear ...
	I0717 13:43:47.469558   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:47.979921   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:48.479861   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:48.981523   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:49.480258   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:49.980353   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:50.482037   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:50.980423   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:51.481983   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:51.980425   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:52.480834   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:52.980408   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:53.480636   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:53.981982   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:54.480057   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:54.980102   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:55.482059   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:55.980064   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:56.480468   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:56.981523   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:57.480542   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:57.981660   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:58.480824   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:58.981982   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:59.482067   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:43:59.981937   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:00.481980   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:00.980087   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:01.480509   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:01.981980   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:02.481949   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:02.981994   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:03.482007   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:03.979901   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:04.480097   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:04.981985   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:05.479964   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:05.981953   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:06.481816   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:06.980056   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:07.481959   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:07.980294   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:08.480607   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:08.980681   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:09.480678   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:09.980543   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:10.480128   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:10.980957   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:11.481925   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:11.982021   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:12.480397   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:12.979833   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:13.479884   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:13.980150   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:14.480249   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:14.980466   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:15.481215   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:15.979816   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:16.479933   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:16.979939   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:17.480290   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:17.979831   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:18.480091   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:18.980080   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:19.481338   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:19.981349   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:20.480259   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:20.980957   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:21.480631   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:21.981657   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:22.480319   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:22.980280   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:23.481286   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:23.981433   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:24.479938   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:24.979897   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:25.481010   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:25.979828   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:26.481231   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:26.981268   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:27.480394   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:27.979870   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:28.479987   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:28.981050   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:29.481028   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:29.981215   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:30.479957   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:30.981501   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:31.480610   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:31.980344   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:32.480717   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:32.980851   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:33.480180   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:33.980057   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:34.480641   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:34.980799   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:35.481530   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:35.981822   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:36.480816   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:36.980869   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:37.481371   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:37.980330   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:38.479895   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:38.980629   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:39.481070   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:39.980420   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:40.480249   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:40.980393   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:41.480246   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:41.980553   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:42.480745   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:42.979828   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:43.479883   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:43.980413   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:44.480843   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:44.979758   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:45.479790   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:45.981864   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:46.481925   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:46.980867   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:47.480665   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:44:47.499986   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.500000   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:44:47.500084   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:44:47.518762   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.518775   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:44:47.518856   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:44:47.537933   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.537947   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:44:47.538022   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:44:47.557474   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.557488   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:44:47.557560   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:44:47.577492   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.577509   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:44:47.577589   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:44:47.597939   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.597952   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:44:47.598029   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:44:47.617228   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.617242   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:44:47.617313   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:44:47.636915   55100 logs.go:284] 0 containers: []
	W0717 13:44:47.636932   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:44:47.636942   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:44:47.636958   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:44:47.679292   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:44:47.679312   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:44:47.693666   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:44:47.693681   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:44:47.750971   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:44:47.750993   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:44:47.751002   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:44:47.768000   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:44:47.768017   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
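
Each diagnostics cycle above fans out over the expected control-plane components, listing containers with "docker ps -a --filter name=k8s_<component>", then collects the kubelet and docker journals, dmesg, describe-nodes output, and container status. A sketch of the container sweep (component list copied from the log; error handling simplified):

    // diag.go - sketch of the diagnostics sweep above: for each expected
    // control-plane component, list matching containers via the same
    // docker ps name filter, and warn when none are found.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func containersFor(component string) []string {
    	// An unreachable daemon also yields empty output; a real tool would
    	// surface that error instead of swallowing it.
    	out, _ := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	return strings.Fields(string(out))
    }

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
    	for _, c := range components {
    		ids := containersFor(c)
    		if len(ids) == 0 {
    			fmt.Printf("W: no container was found matching %q\n", c)
    			continue
    		}
    		fmt.Printf("I: %s: %v\n", c, ids)
    	}
    }
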
	I0717 13:44:50.333120   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:50.344506   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:44:50.362968   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.362987   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:44:50.363096   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:44:50.380818   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.380831   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:44:50.380911   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:44:50.400892   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.400906   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:44:50.400982   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:44:50.420666   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.420680   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:44:50.420760   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:44:50.440080   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.440095   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:44:50.440177   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:44:50.460245   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.460263   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:44:50.460354   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:44:50.479712   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.479731   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:44:50.479803   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:44:50.498959   55100 logs.go:284] 0 containers: []
	W0717 13:44:50.498971   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:44:50.498979   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:44:50.498986   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:44:50.543909   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:44:50.543928   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:44:50.559851   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:44:50.559867   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:44:50.619790   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:44:50.619809   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:44:50.619816   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:44:50.635898   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:44:50.635930   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:44:53.191511   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:53.203762   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:44:53.223127   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.223141   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:44:53.223207   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:44:53.243121   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.243135   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:44:53.243212   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:44:53.262879   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.262892   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:44:53.262966   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:44:53.282811   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.282824   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:44:53.282893   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:44:53.302088   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.302103   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:44:53.302182   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:44:53.323038   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.323052   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:44:53.323141   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:44:53.344977   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.344991   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:44:53.345078   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:44:53.365575   55100 logs.go:284] 0 containers: []
	W0717 13:44:53.365588   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:44:53.365595   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:44:53.365602   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:44:53.405182   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:44:53.405199   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:44:53.428130   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:44:53.428149   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:44:53.487753   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:44:53.487765   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:44:53.487772   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:44:53.503346   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:44:53.503359   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:44:56.056149   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:56.067237   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:44:56.085820   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.085834   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:44:56.085909   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:44:56.106083   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.106104   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:44:56.106179   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:44:56.125414   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.125435   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:44:56.125517   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:44:56.145622   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.145636   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:44:56.145714   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:44:56.164005   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.164016   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:44:56.164108   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:44:56.183122   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.183136   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:44:56.183210   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:44:56.202872   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.202886   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:44:56.202955   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:44:56.221510   55100 logs.go:284] 0 containers: []
	W0717 13:44:56.221524   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:44:56.221541   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:44:56.221551   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:44:56.263798   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:44:56.263813   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:44:56.278905   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:44:56.278923   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:44:56.339178   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:44:56.339198   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:44:56.339215   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:44:56.356562   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:44:56.356577   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:44:58.924174   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:44:58.934866   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:44:58.955933   55100 logs.go:284] 0 containers: []
	W0717 13:44:58.955946   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:44:58.956019   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:44:58.978484   55100 logs.go:284] 0 containers: []
	W0717 13:44:58.978504   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:44:58.978571   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:44:58.998361   55100 logs.go:284] 0 containers: []
	W0717 13:44:58.998373   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:44:58.998447   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:44:59.023154   55100 logs.go:284] 0 containers: []
	W0717 13:44:59.023171   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:44:59.023256   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:44:59.044197   55100 logs.go:284] 0 containers: []
	W0717 13:44:59.044213   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:44:59.044316   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:44:59.065738   55100 logs.go:284] 0 containers: []
	W0717 13:44:59.065753   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:44:59.065825   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:44:59.086989   55100 logs.go:284] 0 containers: []
	W0717 13:44:59.087003   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:44:59.087085   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:44:59.107558   55100 logs.go:284] 0 containers: []
	W0717 13:44:59.107572   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:44:59.107579   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:44:59.107586   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:44:59.169724   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:44:59.169736   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:44:59.169748   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:44:59.186054   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:44:59.186078   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:44:59.246209   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:44:59.246224   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:44:59.291010   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:44:59.291029   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:01.806394   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:01.817131   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:01.836596   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.836610   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:01.836684   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:01.855896   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.855911   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:01.855981   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:01.875373   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.875385   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:01.875457   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:01.894827   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.894840   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:01.894912   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:01.914107   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.914121   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:01.914194   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:01.933665   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.933677   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:01.933748   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:01.952584   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.952598   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:01.952667   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:01.972076   55100 logs.go:284] 0 containers: []
	W0717 13:45:01.972099   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:01.972112   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:01.972124   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:02.028151   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:02.028162   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:02.028169   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:02.043343   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:02.043359   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:02.097237   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:02.097252   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:02.137796   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:02.137814   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:04.652271   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:04.663355   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:04.683728   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.683742   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:04.683814   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:04.704389   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.704401   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:04.704468   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:04.723852   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.723864   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:04.723932   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:04.743818   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.743835   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:04.743929   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:04.762912   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.762926   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:04.763002   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:04.784862   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.784876   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:04.784949   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:04.804454   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.804468   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:04.804536   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:04.826166   55100 logs.go:284] 0 containers: []
	W0717 13:45:04.826178   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:04.826185   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:04.826197   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:04.867980   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:04.867998   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:04.882665   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:04.882681   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:04.943596   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:04.943608   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:04.943617   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:04.960886   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:04.960903   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:07.514098   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:07.525050   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:07.545827   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.545841   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:07.545907   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:07.568605   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.568622   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:07.568707   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:07.592197   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.592212   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:07.592291   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:07.618974   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.618989   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:07.619074   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:07.639200   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.639214   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:07.639288   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:07.660194   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.660207   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:07.660265   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:07.681909   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.681923   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:07.682001   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:07.703709   55100 logs.go:284] 0 containers: []
	W0717 13:45:07.703723   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:07.703729   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:07.703736   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:07.747409   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:07.747427   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:07.762720   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:07.762735   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:07.824877   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:07.824896   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:07.824906   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:07.841265   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:07.841281   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:10.394839   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:10.407428   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:10.427669   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.427683   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:10.427755   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:10.446765   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.446778   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:10.446842   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:10.468187   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.468200   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:10.468275   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:10.488076   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.488091   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:10.488156   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:10.508229   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.508241   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:10.508307   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:10.528812   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.528826   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:10.528899   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:10.548801   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.548814   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:10.548871   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:10.570781   55100 logs.go:284] 0 containers: []
	W0717 13:45:10.570793   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:10.570800   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:10.570807   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:10.610200   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:10.610215   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:10.625749   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:10.625764   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:10.687341   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:10.687354   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:10.687361   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:10.704025   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:10.704041   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:13.262240   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:13.273562   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:13.293114   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.293126   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:13.293198   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:13.313965   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.313978   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:13.314065   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:13.334743   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.334756   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:13.334837   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:13.354578   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.354592   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:13.354666   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:13.373694   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.373710   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:13.373781   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:13.395182   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.395195   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:13.395260   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:13.416516   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.416529   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:13.416603   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:13.437538   55100 logs.go:284] 0 containers: []
	W0717 13:45:13.437554   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:13.437564   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:13.437574   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:13.454139   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:13.454155   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:13.505918   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:13.505932   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:13.547272   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:13.547290   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:13.561900   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:13.561916   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:13.623901   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
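
The recurring "describe nodes" failure is consistent with the empty container probes: nothing is listening on the apiserver port, so the connection to localhost:8443 is refused. To confirm the two observations agree, re-run the harness command verbatim and check the port (the ss check is an extra diagnostic of my own, not something the harness runs):

    # Exact command from the log:
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    # Expected while the apiserver is down:
    #   The connection to the server localhost:8443 was refused - did you specify the right host or port?

    # Assumption: ss is available inside the node.
    sudo ss -ltn '( sport = :8443 )'   # no listener -> the refusal above is expected
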
	I0717 13:45:16.124059   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:16.139309   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:16.163729   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.163751   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:16.163857   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:16.200446   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.200459   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:16.200535   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:16.230266   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.230286   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:16.230417   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:16.256862   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.256891   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:16.257017   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:16.285705   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.285719   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:16.285788   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:16.306428   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.306443   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:16.306524   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:16.337995   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.338013   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:16.338107   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:16.362231   55100 logs.go:284] 0 containers: []
	W0717 13:45:16.362249   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:16.362258   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:16.362267   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:16.435874   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:16.435893   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:16.480728   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:16.480749   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:16.496741   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:16.496757   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:16.567681   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:16.567702   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:16.567713   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:19.089417   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:19.100391   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:19.119135   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.119149   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:19.119227   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:19.138200   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.138213   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:19.138278   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:19.157029   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.157047   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:19.157136   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:19.175481   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.175492   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:19.175556   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:19.194618   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.194633   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:19.194705   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:19.214753   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.214767   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:19.214844   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:19.234137   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.234151   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:19.234217   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:19.253020   55100 logs.go:284] 0 containers: []
	W0717 13:45:19.253039   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:19.253047   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:19.253055   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:19.266563   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:19.266577   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:19.325735   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:19.325746   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:19.325753   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:19.342555   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:19.342577   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:19.399309   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:19.399324   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
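
For reference, the "Gathering logs" steps in each cycle boil down to four fixed commands, run in varying order. Collected into one hypothetical helper (the commands are verbatim from the log; only the function wrapper is new):

    gather_logs() {
      # kubelet and container-runtime units, last 400 lines each
      sudo journalctl -u kubelet -n 400
      sudo journalctl -u docker -u cri-docker -n 400
      # kernel ring buffer, warnings and worse only
      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      # container status, preferring crictl when present
      sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    }
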
	I0717 13:45:21.938821   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:21.950135   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:21.968587   55100 logs.go:284] 0 containers: []
	W0717 13:45:21.968601   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:21.968672   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:21.987461   55100 logs.go:284] 0 containers: []
	W0717 13:45:21.987475   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:21.987542   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:22.007132   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.007146   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:22.007216   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:22.027661   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.027677   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:22.027754   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:22.048239   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.048252   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:22.048327   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:22.070707   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.070721   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:22.070809   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:22.091851   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.091872   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:22.091965   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:22.114201   55100 logs.go:284] 0 containers: []
	W0717 13:45:22.122516   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:22.122535   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:22.122544   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:22.171814   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:22.171836   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:22.188721   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:22.188741   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:22.248894   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:22.248906   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:22.248916   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:22.264891   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:22.264905   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
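
The container-status command above is worth unpacking: if "which crictl" finds nothing, the backticks substitute the literal word crictl, that invocation then fails, and the || falls through to docker. The same fallback, spelled out:

    # Equivalent long form of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a || sudo docker ps -a   # still fall back if crictl itself errors
    else
      sudo docker ps -a
    fi
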
	I0717 13:45:24.819886   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:24.833869   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:24.855547   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.855564   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:24.855647   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:24.879723   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.879740   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:24.879820   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:24.904506   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.904519   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:24.904597   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:24.927025   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.927045   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:24.927162   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:24.954922   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.954944   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:24.955068   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:24.976344   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.976357   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:24.976427   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:24.995143   55100 logs.go:284] 0 containers: []
	W0717 13:45:24.995158   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:24.995224   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:25.015581   55100 logs.go:284] 0 containers: []
	W0717 13:45:25.015593   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:25.015599   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:25.015609   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:25.037073   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:25.037091   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:25.100806   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:25.100820   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:25.149835   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:25.149858   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:25.166822   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:25.166838   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:25.226166   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
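
Each cycle opens with a process-level readiness check before the container probes. pgrep -xnf matches the pattern against the full command line (-f), requires a whole-line match (-x), and reports only the newest match (-n); a nonzero exit means no apiserver process yet, which is what keeps the loop retrying. For example:

    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process found"
    else
      echo "apiserver not running yet"   # the case throughout this log
    fi
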
	I0717 13:45:27.726426   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:27.739138   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:27.758651   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.758663   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:27.758728   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:27.777100   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.777114   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:27.777189   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:27.795786   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.795799   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:27.795877   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:27.815175   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.815188   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:27.815264   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:27.834794   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.834807   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:27.834894   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:27.853848   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.853861   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:27.853930   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:27.873761   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.873776   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:27.873856   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:27.927777   55100 logs.go:284] 0 containers: []
	W0717 13:45:27.927791   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:27.927803   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:27.927809   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:27.943407   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:27.943422   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:27.993168   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:27.993183   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:28.031290   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:28.031307   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:28.044812   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:28.044831   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:28.101731   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:30.602404   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:30.614856   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:30.635582   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.635595   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:30.635686   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:30.655965   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.655979   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:30.656054   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:30.675516   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.675527   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:30.675593   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:30.694636   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.694649   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:30.694723   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:30.717847   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.717867   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:30.717945   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:30.739728   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.739743   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:30.739812   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:30.758701   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.758715   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:30.758789   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:30.780762   55100 logs.go:284] 0 containers: []
	W0717 13:45:30.780778   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:30.780787   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:30.780795   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:30.822969   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:30.822992   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:30.837830   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:30.837848   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:30.925486   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:30.925504   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:30.925511   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:30.943119   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:30.943133   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
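
Timestamps show the whole probe repeating roughly every 2.5-3 seconds (13:45:07, :10, :13, ... :50, :53). A stripped-down sketch of that outer retry loop; the interval and timeout here are illustrative assumptions, not values taken from minikube's source:

    # Hypothetical outer loop; the real one lives in minikube's Go code.
    deadline=$((SECONDS + 300))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "timed out waiting for apiserver"; exit 1; }
      sleep 3   # matches the ~3 s cadence between cycles in this log
    done
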
	I0717 13:45:33.505255   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:33.517345   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:33.539761   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.539775   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:33.539843   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:33.558603   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.558616   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:33.558687   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:33.578493   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.578507   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:33.578586   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:33.596970   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.596984   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:33.597063   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:33.615706   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.615719   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:33.615790   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:33.635693   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.635707   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:33.635781   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:33.654366   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.654379   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:33.654447   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:33.672740   55100 logs.go:284] 0 containers: []
	W0717 13:45:33.672754   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:33.672761   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:33.672768   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:33.713576   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:33.713590   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:33.727471   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:33.727484   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:33.782798   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:33.782811   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:33.782820   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:33.797924   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:33.797936   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:36.351440   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:36.363528   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:36.383128   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.383143   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:36.383210   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:36.402304   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.402318   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:36.402387   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:36.421937   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.421949   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:36.422022   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:36.440955   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.440967   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:36.441054   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:36.461298   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.461309   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:36.461374   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:36.479762   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.479777   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:36.479848   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:36.500058   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.500077   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:36.500146   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:36.519613   55100 logs.go:284] 0 containers: []
	W0717 13:45:36.519627   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:36.519634   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:36.519649   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:36.571785   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:36.571800   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:36.613064   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:36.613079   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:36.627350   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:36.627363   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:36.682592   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:36.682604   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:36.682611   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:39.198095   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:39.209608   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:39.228690   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.228704   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:39.228774   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:39.248087   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.248100   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:39.248167   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:39.267187   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.267203   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:39.267276   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:39.287755   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.287767   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:39.287831   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:39.306652   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.306666   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:39.306733   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:39.326043   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.326056   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:39.326126   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:39.345713   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.345728   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:39.345794   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:39.364612   55100 logs.go:284] 0 containers: []
	W0717 13:45:39.364625   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:39.364632   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:39.364639   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:39.416223   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:39.416237   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:39.455139   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:39.455158   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:39.469384   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:39.469399   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:39.525679   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:39.525691   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:39.525698   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:42.043140   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:42.055506   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:42.075976   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.075994   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:42.076068   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:42.096788   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.096802   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:42.096870   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:42.118420   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.122145   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:42.122215   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:42.141867   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.141881   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:42.141949   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:42.162750   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.162763   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:42.162828   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:42.182166   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.182181   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:42.182246   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:42.202560   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.202572   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:42.202645   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:42.222353   55100 logs.go:284] 0 containers: []
	W0717 13:45:42.222367   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:42.222374   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:42.222381   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:42.259868   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:42.259883   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:42.273619   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:42.273633   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:42.329779   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:42.329792   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:42.329799   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:42.345112   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:42.345127   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:44.897845   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:44.909986   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:44.942805   55100 logs.go:284] 0 containers: []
	W0717 13:45:44.942824   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:44.942932   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:44.966395   55100 logs.go:284] 0 containers: []
	W0717 13:45:44.966409   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:44.966479   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:44.985127   55100 logs.go:284] 0 containers: []
	W0717 13:45:44.985139   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:44.985206   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:45.003793   55100 logs.go:284] 0 containers: []
	W0717 13:45:45.003806   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:45.003875   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:45.032697   55100 logs.go:284] 0 containers: []
	W0717 13:45:45.032714   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:45.032790   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:45.052114   55100 logs.go:284] 0 containers: []
	W0717 13:45:45.052135   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:45.052252   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:45.074749   55100 logs.go:284] 0 containers: []
	W0717 13:45:45.074763   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:45.074832   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:45.094994   55100 logs.go:284] 0 containers: []
	W0717 13:45:45.095010   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:45.095019   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:45.095027   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:45.137171   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:45.137194   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:45.156853   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:45.156877   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:45.219274   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:45.219289   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:45.219298   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:45.239046   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:45.239064   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:47.799916   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:47.812111   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:47.832368   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.832382   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:47.832452   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:47.852794   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.852808   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:47.852876   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:47.871513   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.871526   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:47.871598   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:47.890513   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.890525   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:47.890594   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:47.910445   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.910458   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:47.910528   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:47.930041   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.930055   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:47.930122   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:47.949308   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.949322   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:47.949392   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:47.969337   55100 logs.go:284] 0 containers: []
	W0717 13:45:47.969350   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:47.969356   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:47.969366   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:48.010394   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:48.010409   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:48.024355   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:48.024369   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:48.080495   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:48.080508   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:48.080519   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:48.096563   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:48.096580   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:50.650129   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:50.661275   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:50.681631   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.681645   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:50.681717   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:50.701955   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.701969   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:50.702055   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:50.723148   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.723162   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:50.723232   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:50.743503   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.743516   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:50.743585   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:50.763648   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.763662   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:50.763732   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:50.782744   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.782757   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:50.782832   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:50.801699   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.801713   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:50.801786   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:50.822447   55100 logs.go:284] 0 containers: []
	W0717 13:45:50.822462   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:50.822469   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:50.822476   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:50.873257   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:50.873272   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:50.914351   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:50.914369   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:50.929571   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:50.929587   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:50.989951   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:50.989967   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:50.989977   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
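
[editor's note] The scan above probes for each control-plane container by its `k8s_`-prefixed Docker name and finds none, consistent with an apiserver that never came up. A minimal sketch of the same per-component query (the component list and the `k8s_` name prefix are copied from the log; the rest is illustrative, not minikube's actual implementation):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// components mirrors the names probed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

func main() {
	for _, c := range components {
		// Same query the log shows:
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the repeated warning in the log.
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
		}
	}
}
```
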
	I0717 13:45:53.506238   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:53.519045   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:53.538565   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.538582   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:53.538668   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:53.559047   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.559067   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:53.559147   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:53.582987   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.583004   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:53.583082   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:53.602905   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.602930   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:53.603012   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:53.622985   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.623001   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:53.623098   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:53.645504   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.645520   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:53.645601   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:53.669335   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.669356   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:53.669442   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:53.690334   55100 logs.go:284] 0 containers: []
	W0717 13:45:53.690347   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:53.690354   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:53.690362   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:53.732486   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:53.732506   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:53.748238   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:53.748259   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:53.807965   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:53.807985   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:53.807992   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:53.824232   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:53.824249   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
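
[editor's note] The `pgrep -xnf kube-apiserver.*minikube.*` probes recur roughly every three seconds, which suggests a fixed-interval retry loop waiting for the process to appear. A hedged sketch of such a poll loop (the interval and the two-minute deadline are assumptions for illustration, not values taken from minikube's source):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second) // roughly the cadence visible in the log
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
```
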
	I0717 13:45:56.384517   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:56.395093   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:56.414035   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.414049   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:56.414134   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:56.435928   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.435942   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:56.436043   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:56.456585   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.456602   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:56.456691   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:56.478764   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.478782   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:56.478871   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:56.499537   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.499551   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:56.499626   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:56.520894   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.520926   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:56.521047   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:56.544597   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.544612   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:56.544689   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:56.566135   55100 logs.go:284] 0 containers: []
	W0717 13:45:56.566148   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:56.566155   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:56.566162   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:56.609028   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:56.609043   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:56.624037   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:56.624052   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:56.693454   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:56.693466   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:56.693472   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:56.708714   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:56.708727   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:45:59.267137   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:45:59.278544   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:45:59.298200   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.298219   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:45:59.298297   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:45:59.319813   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.319827   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:45:59.319901   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:45:59.342051   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.342066   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:45:59.342136   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:45:59.362031   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.362049   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:45:59.362152   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:45:59.388731   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.388750   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:45:59.388832   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:45:59.428907   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.428922   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:45:59.429011   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:45:59.449800   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.449817   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:45:59.449905   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:45:59.480031   55100 logs.go:284] 0 containers: []
	W0717 13:45:59.480055   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:45:59.480068   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:45:59.480083   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:45:59.522354   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:45:59.522374   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:45:59.538294   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:45:59.538315   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:45:59.612966   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:45:59.612996   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:45:59.613008   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:45:59.630534   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:45:59.630549   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:02.191279   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:02.203611   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:02.222703   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.222717   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:02.222778   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:02.243269   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.243283   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:02.243348   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:02.261627   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.261639   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:02.261717   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:02.280648   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.280661   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:02.280726   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:02.300722   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.300734   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:02.300800   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:02.320517   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.320531   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:02.320608   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:02.341039   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.341052   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:02.341115   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:02.362304   55100 logs.go:284] 0 containers: []
	W0717 13:46:02.362315   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:02.362322   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:02.362329   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:02.404026   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:02.404043   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:02.426566   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:02.426581   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:02.482409   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:02.482422   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:02.482428   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:02.497798   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:02.497813   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:05.049314   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:05.059842   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:05.078358   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.078372   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:05.078439   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:05.096611   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.096632   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:05.096701   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:05.116014   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.116027   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:05.116094   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:05.135289   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.135302   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:05.135381   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:05.154183   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.154198   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:05.154267   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:05.173355   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.173369   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:05.173437   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:05.192844   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.192858   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:05.192927   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:05.212907   55100 logs.go:284] 0 containers: []
	W0717 13:46:05.212920   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:05.212926   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:05.212934   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:05.228353   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:05.228368   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:05.279197   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:05.279211   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:05.319745   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:05.319764   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:05.334186   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:05.334202   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:05.418794   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
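
[editor's note] Every kubectl attempt fails with a refused connection to localhost:8443 because no apiserver container exists to listen there. One way to confirm reachability directly from the node, sketched as a hypothetical Go probe (`/healthz` is the standard apiserver health endpoint; the skipped TLS verification and the timeout are illustrative choices, not part of this test suite):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification: this is a reachability probe, not a trust check.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no kube-apiserver container running, this fails with
		// "connection refused", matching the kubectl errors in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver health:", resp.Status)
}
```
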
	I0717 13:46:07.920262   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:07.932778   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:07.952216   55100 logs.go:284] 0 containers: []
	W0717 13:46:07.952229   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:07.952296   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:07.972646   55100 logs.go:284] 0 containers: []
	W0717 13:46:07.972659   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:07.972729   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:07.992302   55100 logs.go:284] 0 containers: []
	W0717 13:46:07.992317   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:07.992390   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:08.012122   55100 logs.go:284] 0 containers: []
	W0717 13:46:08.012135   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:08.012201   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:08.032634   55100 logs.go:284] 0 containers: []
	W0717 13:46:08.032647   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:08.032715   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:08.051692   55100 logs.go:284] 0 containers: []
	W0717 13:46:08.051706   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:08.051775   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:08.071377   55100 logs.go:284] 0 containers: []
	W0717 13:46:08.071390   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:08.071457   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:08.090618   55100 logs.go:284] 0 containers: []
	W0717 13:46:08.090630   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:08.090638   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:08.090647   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:08.148252   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:08.148265   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:08.148273   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:08.163903   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:08.163919   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:08.214061   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:08.214075   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:08.255079   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:08.255093   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:10.769949   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:10.782676   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:10.802036   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.802049   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:10.802118   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:10.821430   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.821444   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:10.821512   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:10.840315   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.840329   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:10.840398   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:10.859389   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.859402   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:10.859474   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:10.878314   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.878327   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:10.878395   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:10.898229   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.898242   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:10.898310   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:10.917422   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.917434   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:10.917504   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:10.936605   55100 logs.go:284] 0 containers: []
	W0717 13:46:10.936619   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:10.936626   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:10.936633   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:10.950317   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:10.950330   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:11.006899   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:11.006913   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:11.006922   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:11.022120   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:11.022134   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:11.074733   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:11.074748   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
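
[editor's note] The container-status step runs `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`: it prefers crictl and falls back to the Docker CLI when crictl is absent or fails. A sketch of the same first-success fallback (the helper name and structure are hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runFirstAvailable tries each command in order and returns the first
// successful output, mirroring the `crictl ... || docker ...` fallback
// seen in the log's container-status step.
func runFirstAvailable(cmds [][]string) (string, error) {
	var lastErr error
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).Output()
		if err == nil {
			return string(out), nil
		}
		lastErr = err
	}
	return "", fmt.Errorf("all commands failed: %w", lastErr)
}

func main() {
	out, err := runFirstAvailable([][]string{
		{"crictl", "ps", "-a"},
		{"docker", "ps", "-a"},
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(out)
}
```

The shell form has the same shape: try the preferred tool, and only when it errors out, fall through to the next one.
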
	I0717 13:46:13.612778   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:13.624046   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:13.643459   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.643473   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:13.643544   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:13.663437   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.663450   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:13.663520   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:13.682506   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.682519   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:13.682588   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:13.702471   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.702483   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:13.702551   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:13.721511   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.721524   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:13.721591   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:13.740895   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.740908   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:13.741001   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:13.759956   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.759970   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:13.760041   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:13.779885   55100 logs.go:284] 0 containers: []
	W0717 13:46:13.779901   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:13.779911   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:13.779921   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:13.793490   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:13.793502   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:13.848951   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:13.848967   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:13.848975   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:13.864110   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:13.864124   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:13.913494   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:13.913509   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:16.455212   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:16.467385   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:16.486896   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.486915   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:16.486997   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:16.505623   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.505636   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:16.505710   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:16.524199   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.524214   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:16.524319   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:16.543888   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.543904   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:16.543994   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:16.564092   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.564107   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:16.564178   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:16.583394   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.583408   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:16.583480   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:16.603297   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.603310   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:16.603381   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:16.630637   55100 logs.go:284] 0 containers: []
	W0717 13:46:16.630650   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:16.630657   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:16.630664   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:16.644446   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:16.644463   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:16.699979   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:16.699993   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:16.700001   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:16.715030   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:16.715043   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:16.767792   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:16.767806   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
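
[editor's note] Each failed `kubectl describe nodes` entry above records an empty stdout, the refusal message on stderr, and exit status 1. A sketch of capturing those three pieces separately (the kubectl path and flags are copied verbatim from the log; the error handling around them is illustrative):

```go
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.16.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")

	// Capture the streams separately, as the log report does.
	var stdout, stderr bytes.Buffer
	cmd.Stdout, cmd.Stderr = &stdout, &stderr

	err := cmd.Run()
	fmt.Printf("stdout:\n%s\n", stdout.String())
	fmt.Printf("stderr:\n%s\n", stderr.String())

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// The log shows "Process exited with status 1" for this command.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}
```
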
	I0717 13:46:19.307709   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:19.320239   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:19.339305   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.339318   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:19.339387   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:19.358320   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.358332   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:19.358393   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:19.377840   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.377853   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:19.377920   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:19.397886   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.397899   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:19.397975   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:19.417792   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.417805   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:19.417873   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:19.438231   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.438244   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:19.438314   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:19.457679   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.457697   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:19.457769   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:19.476793   55100 logs.go:284] 0 containers: []
	W0717 13:46:19.476806   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:19.476812   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:19.476820   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:19.529120   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:19.529134   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:19.566700   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:19.566717   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:19.580988   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:19.581006   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:19.638351   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:19.638364   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:19.638372   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:22.154836   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:22.166995   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:22.188450   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.188463   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:22.188519   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:22.207768   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.207781   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:22.207848   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:22.227273   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.227287   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:22.227356   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:22.247452   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.247465   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:22.247531   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:22.266374   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.266388   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:22.266456   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:22.285790   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.285803   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:22.285873   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:22.305523   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.305537   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:22.305606   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:22.325494   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.325508   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:22.325516   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:22.325524   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:22.364390   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:22.364404   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:22.378050   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:22.378064   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:22.433109   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:22.433122   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:22.433131   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:22.448620   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:22.448633   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:25.000531   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:25.012963   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:25.031370   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.031382   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:25.031464   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:25.052247   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.052261   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:25.052336   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:25.073589   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.073601   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:25.073671   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:25.093596   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.093610   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:25.093680   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:25.112611   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.112624   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:25.112697   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:25.132977   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.132992   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:25.133062   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:25.154791   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.154804   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:25.154886   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:25.175128   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.175153   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:25.175165   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:25.175177   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:25.217623   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:25.217658   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:25.232708   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:25.232723   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:25.294654   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:25.294666   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:25.294674   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:25.312207   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:25.312224   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:27.877995   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:27.888730   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:27.907101   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.907114   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:27.907182   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:27.926233   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.926246   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:27.926316   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:27.944942   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.944956   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:27.945027   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:27.963565   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.963577   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:27.963648   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:27.982652   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.982667   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:27.982734   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:28.002545   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.002558   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:28.002629   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:28.021116   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.021128   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:28.021198   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:28.040047   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.040061   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:28.040068   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:28.040075   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:28.090365   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:28.090379   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:28.129181   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:28.129213   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:28.143355   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:28.143371   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:28.198268   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:28.198280   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:28.198287   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:30.713907   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:30.724915   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:30.747634   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.747651   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:30.747731   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:30.776229   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.776242   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:30.776307   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:30.797856   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.797870   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:30.797942   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:30.819522   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.819542   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:30.819660   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:30.838759   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.838777   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:30.838871   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:30.862531   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.862548   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:30.862626   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:30.886436   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.886457   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:30.886541   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:30.919397   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.919415   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:30.919439   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:30.919458   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:30.961186   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:30.961209   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:30.977283   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:30.977301   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:31.042538   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:31.042554   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:31.042560   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:31.059676   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:31.059688   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:33.611526   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:33.624046   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:33.643529   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.643557   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:33.643630   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:33.663736   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.663748   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:33.663813   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:33.682913   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.682929   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:33.682999   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:33.701433   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.701447   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:33.701516   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:33.719954   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.719967   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:33.720031   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:33.739111   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.739125   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:33.739193   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:33.758008   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.758022   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:33.758090   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:33.777988   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.778002   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:33.778009   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:33.778016   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:33.815858   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:33.815872   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:33.829751   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:33.829767   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:33.922526   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:33.922537   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:33.922544   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:33.937869   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:33.937882   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	[... identical log-gathering cycles (pgrep for kube-apiserver, "docker ps" checks for each control-plane container, kubelet/dmesg/Docker/container-status logs, and a failing "describe nodes" against localhost:8443) repeat every ~2.5s from 13:46:36 through 13:47:11 ...]
	I0717 13:47:13.601796   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:13.612718   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:13.632067   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.632082   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:13.632153   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:13.651331   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.651343   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:13.651418   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:13.670704   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.670718   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:13.670785   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:13.690983   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.690996   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:13.691069   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:13.709168   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.709181   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:13.709250   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:13.727577   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.727589   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:13.727657   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:13.746154   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.746167   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:13.746234   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:13.765361   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.765374   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:13.765380   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:13.765388   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:13.803460   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:13.803494   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:13.817359   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:13.817375   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:13.873383   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:13.873400   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:13.873407   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:13.888915   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:13.888930   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
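	The cycle above is minikube's control-plane probe: for each expected component it runs docker ps -a --filter=name=k8s_<component> --format={{.ID}} and logs a warning when no container matches. A minimal, hypothetical Go sketch of that probe (standalone, shelling out to a local docker CLI instead of minikube's SSH runner; not the actual logs.go implementation):

	// probe_containers.go — hypothetical sketch of the container probe seen above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The same component list the log cycles through.
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
		}
		for _, name := range components {
			// docker ps -a --filter=name=k8s_<name> --format={{.ID}}
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("E docker ps failed for %q: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			fmt.Printf("I %d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("W No container was found matching %q\n", name)
			}
		}
	}

	Every probe in this run returns zero IDs, which is why each cycle falls back to gathering the kubelet, dmesg, describe-nodes, Docker, and container-status logs instead.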
	I0717 13:47:16.444217   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:16.456612   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:16.476232   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.476245   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:16.476313   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:16.496486   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.496499   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:16.496566   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:16.516342   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.516356   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:16.516426   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:16.535211   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.535224   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:16.535292   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:16.554050   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.554062   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:16.554127   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:16.575006   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.575020   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:16.575092   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:16.594029   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.594045   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:16.594122   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:16.620392   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.620406   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:16.620413   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:16.620421   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:16.635160   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:16.635176   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:16.690066   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:16.690078   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:16.690086   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:16.705521   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:16.705535   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:16.756981   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:16.756995   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:19.296251   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:19.308319   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:19.327166   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.327180   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:19.327256   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:19.347315   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.347328   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:19.347399   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:19.367067   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.367080   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:19.367157   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:19.385788   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.385808   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:19.385895   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:19.406088   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.406100   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:19.406170   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:19.426149   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.426162   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:19.426230   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:19.445038   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.445052   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:19.445123   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:19.465191   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.465204   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:19.465210   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:19.465218   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:19.503504   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:19.503518   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:19.517847   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:19.517861   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:19.573393   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:19.573405   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:19.573412   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:19.591562   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:19.591580   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:22.148863   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:22.161400   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:22.180619   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.180633   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:22.180706   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:22.199346   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.199361   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:22.199437   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:22.218367   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.218381   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:22.218447   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:22.237539   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.237553   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:22.237624   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:22.255912   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.255925   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:22.255992   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:22.276039   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.276052   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:22.276122   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:22.295376   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.295389   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:22.295459   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:22.314632   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.314644   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:22.314651   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:22.314657   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:22.353334   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:22.353348   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:22.366919   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:22.366933   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:22.422708   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:22.422722   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:22.422728   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:22.437853   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:22.437866   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:24.989581   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:25.001542   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:25.021537   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.021560   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:25.021639   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:25.042600   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.042613   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:25.042685   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:25.062558   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.062570   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:25.062640   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:25.082697   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.082710   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:25.082777   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:25.103082   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.103096   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:25.103167   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:25.123181   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.123195   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:25.123264   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:25.142316   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.142328   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:25.142395   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:25.161215   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.161228   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:25.161235   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:25.161242   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:25.201753   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:25.201767   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:25.215525   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:25.215546   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:25.272017   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:25.272030   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:25.272037   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:25.287328   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:25.287341   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:27.836677   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:27.847540   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:27.866716   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.866730   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:27.866809   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:27.885457   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.885470   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:27.885539   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:27.922785   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.922800   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:27.922882   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:27.942980   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.942993   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:27.943062   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:27.963494   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.963506   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:27.963574   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:27.982995   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.983009   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:27.983078   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:28.002153   55100 logs.go:284] 0 containers: []
	W0717 13:47:28.002166   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:28.002235   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:28.020616   55100 logs.go:284] 0 containers: []
	W0717 13:47:28.020629   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:28.020644   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:28.020652   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:28.060325   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:28.060340   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:28.074172   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:28.074186   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:28.130519   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:28.130531   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:28.130537   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:28.146358   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:28.146373   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:30.697853   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:30.708125   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:30.727388   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.727401   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:30.727469   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:30.746427   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.746441   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:30.746507   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:30.765156   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.765169   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:30.765237   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:30.785503   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.785516   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:30.785586   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:30.804761   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.804775   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:30.804857   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:30.825573   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.825588   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:30.825659   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:30.844650   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.844665   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:30.844741   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:30.865441   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.865456   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:30.865463   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:30.865470   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:30.879344   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:30.879359   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:30.965310   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:30.965325   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:30.965332   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:30.980915   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:30.980928   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:31.031140   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:31.031155   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:33.572773   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:33.585436   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:33.607653   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.607666   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:33.607735   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:33.627728   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.627740   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:33.627826   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:33.647218   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.647231   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:33.647297   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:33.666597   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.666611   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:33.666680   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:33.686422   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.686436   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:33.686508   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:33.705892   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.705906   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:33.705971   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:33.724909   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.724923   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:33.724991   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:33.743360   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.743374   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:33.743381   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:33.743387   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:33.756773   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:33.756791   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:33.815777   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:33.815795   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:33.815804   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:33.831711   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:33.831726   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:33.881236   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:33.881250   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:36.454919   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:36.467222   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:36.487204   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.487217   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:36.487283   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:36.506959   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.506972   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:36.507042   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:36.526848   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.526863   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:36.526930   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:36.545987   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.546002   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:36.546072   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:36.563988   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.564001   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:36.564068   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:36.584342   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.584355   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:36.584424   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:36.603581   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.603595   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:36.603663   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:36.622318   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.622332   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:36.622347   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:36.622354   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:36.676988   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:36.677003   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:36.677009   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:36.692396   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:36.692411   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:36.743621   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:36.743636   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:36.783118   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:36.783133   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:39.298234   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:39.310301   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:39.330761   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.330777   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:39.330894   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:39.357868   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.357881   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:39.357945   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:39.377341   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.377354   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:39.377424   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:39.398362   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.398376   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:39.398443   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:39.417859   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.417873   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:39.417942   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:39.436550   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.436563   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:39.436632   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:39.456439   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.456452   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:39.456520   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:39.476275   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.476287   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:39.476295   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:39.476302   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:39.526849   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:39.526864   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:39.565598   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:39.565612   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:39.579269   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:39.579282   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:39.634379   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:39.634391   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:39.634397   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:42.150038   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:42.160474   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:42.179523   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.179536   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:42.179605   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:42.198920   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.198933   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:42.199001   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:42.218822   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.218835   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:42.218901   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:42.237752   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.237775   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:42.237851   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:42.256468   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.256480   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:42.256550   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:42.275686   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.275699   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:42.275781   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:42.295412   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.295425   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:42.295492   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:42.314287   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.314301   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:42.314308   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:42.314315   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:42.352790   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:42.352808   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:42.366656   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:42.366671   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:42.421856   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:42.421869   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:42.421877   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:42.437205   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:42.437218   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:44.988851   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:45.001040   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:45.020265   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.020279   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:45.020345   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:45.039574   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.039585   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:45.039654   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:45.059543   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.059555   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:45.059627   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:45.079185   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.079201   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:45.079272   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:45.099336   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.099350   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:45.099418   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:45.129428   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.129442   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:45.129510   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:45.148801   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.148814   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:45.148883   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:45.169017   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.169030   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:45.169036   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:45.169044   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:45.209701   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:45.209717   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:45.223643   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:45.223659   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:45.280660   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:45.280672   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:45.280681   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:45.296216   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:45.296229   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:47.847866   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:47.858365   55100 kubeadm.go:640] restartCluster took 4m11.990420193s
	W0717 13:47:47.858405   55100 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0717 13:47:47.858439   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:47:48.273989   55100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:47:48.285192   55100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:47:48.294418   55100 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:47:48.294469   55100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:47:48.303704   55100 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
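	The status-2 exit above is how the stale-config check signals "nothing to clean": the four kubeconfig files are listed with ls -la, and a non-zero exit is read as the absence of a previous installation, so cleanup is skipped. A hypothetical standalone Go sketch of that check (run on the local host here rather than over SSH):

	// stale_config_check.go — hypothetical sketch of the stale-config probe above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		// ls exits non-zero (status 2 in the log) if any file is missing.
		if err := exec.Command("ls", append([]string{"-la"}, files...)...).Run(); err != nil {
			fmt.Println("config check failed, skipping stale config cleanup:", err)
			return
		}
		fmt.Println("stale kubeconfig files present; cleanup would run")
	}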
	I0717 13:47:48.303730   55100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:47:48.354441   55100 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:47:48.354478   55100 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:47:48.602759   55100 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:47:48.602897   55100 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:47:48.602980   55100 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:47:48.781069   55100 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:47:48.781848   55100 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:47:48.788562   55100 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:47:48.859643   55100 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:47:48.901886   55100 out.go:204]   - Generating certificates and keys ...
	I0717 13:47:48.901966   55100 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:47:48.902058   55100 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:47:48.902134   55100 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:47:48.902227   55100 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:47:48.902287   55100 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:47:48.902340   55100 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:47:48.902414   55100 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:47:48.902469   55100 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:47:48.902534   55100 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:47:48.902618   55100 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:47:48.902649   55100 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:47:48.902716   55100 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:47:49.025416   55100 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:47:49.111913   55100 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:47:49.208454   55100 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:47:49.404382   55100 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:47:49.404829   55100 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:47:49.447284   55100 out.go:204]   - Booting up control plane ...
	I0717 13:47:49.447457   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:47:49.447616   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:47:49.447747   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:47:49.447905   55100 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:47:49.448143   55100 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:48:29.413910   55100 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:48:29.414781   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:29.415002   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:48:34.416636   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:34.416832   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:48:44.417390   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:44.417529   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:49:04.419383   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:49:04.419610   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:49:44.421349   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:49:44.421613   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:49:44.421634   55100 kubeadm.go:322] 
	I0717 13:49:44.421682   55100 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:49:44.421723   55100 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:49:44.421729   55100 kubeadm.go:322] 
	I0717 13:49:44.421765   55100 kubeadm.go:322] This error is likely caused by:
	I0717 13:49:44.421814   55100 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:49:44.421987   55100 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:49:44.422000   55100 kubeadm.go:322] 
	I0717 13:49:44.422128   55100 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:49:44.422203   55100 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:49:44.422254   55100 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:49:44.422269   55100 kubeadm.go:322] 
	I0717 13:49:44.422384   55100 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:49:44.422500   55100 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:49:44.422639   55100 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:49:44.422692   55100 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:49:44.422782   55100 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:49:44.422821   55100 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:49:44.424576   55100 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:49:44.424650   55100 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:49:44.424757   55100 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:49:44.424843   55100 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:49:44.424913   55100 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:49:44.424974   55100 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 13:49:44.425034   55100 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 13:49:44.425064   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:49:44.837728   55100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:49:44.849035   55100 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:49:44.849093   55100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:49:44.857988   55100 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:49:44.858010   55100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:49:44.910224   55100 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:49:44.910272   55100 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:49:45.153838   55100 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:49:45.153923   55100 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:49:45.153994   55100 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:49:45.335570   55100 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:49:45.336206   55100 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:49:45.342868   55100 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:49:45.409755   55100 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:49:45.431159   55100 out.go:204]   - Generating certificates and keys ...
	I0717 13:49:45.431228   55100 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:49:45.431308   55100 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:49:45.431407   55100 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:49:45.431475   55100 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:49:45.431558   55100 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:49:45.431631   55100 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:49:45.431698   55100 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:49:45.431776   55100 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:49:45.431835   55100 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:49:45.431911   55100 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:49:45.431967   55100 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:49:45.432031   55100 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:49:45.692593   55100 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:49:45.867599   55100 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:49:46.013236   55100 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:49:46.147579   55100 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:49:46.148127   55100 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:49:46.169678   55100 out.go:204]   - Booting up control plane ...
	I0717 13:49:46.169847   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:49:46.170021   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:49:46.170217   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:49:46.170453   55100 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:49:46.170776   55100 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:50:26.155927   55100 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:50:26.156336   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:26.156521   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:50:31.158086   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:31.158320   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:50:41.158610   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:41.158758   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:51:01.160469   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:51:01.160680   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:51:41.162096   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:51:41.162311   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:51:41.162327   55100 kubeadm.go:322] 
	I0717 13:51:41.162378   55100 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:51:41.162419   55100 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:51:41.162438   55100 kubeadm.go:322] 
	I0717 13:51:41.162488   55100 kubeadm.go:322] This error is likely caused by:
	I0717 13:51:41.162534   55100 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:51:41.162702   55100 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:51:41.162718   55100 kubeadm.go:322] 
	I0717 13:51:41.162857   55100 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:51:41.162912   55100 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:51:41.162961   55100 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:51:41.162975   55100 kubeadm.go:322] 
	I0717 13:51:41.163090   55100 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:51:41.163209   55100 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:51:41.163311   55100 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:51:41.163371   55100 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:51:41.163464   55100 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:51:41.163508   55100 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:51:41.165287   55100 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:51:41.165357   55100 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:51:41.165476   55100 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:51:41.165572   55100 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:51:41.165643   55100 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:51:41.165703   55100 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 13:51:41.165735   55100 kubeadm.go:406] StartCluster complete in 8m5.326883161s
	I0717 13:51:41.165833   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:51:41.184757   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.184770   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:51:41.184842   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:51:41.204553   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.204565   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:51:41.204631   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:51:41.223474   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.223487   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:51:41.223557   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:51:41.244430   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.244444   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:51:41.244517   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:51:41.264190   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.264205   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:51:41.264275   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:51:41.283668   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.283681   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:51:41.283748   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:51:41.302922   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.302936   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:51:41.303004   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:51:41.321395   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.321409   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:51:41.321416   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:51:41.321423   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:51:41.336937   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:51:41.336949   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:51:41.390739   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:51:41.390753   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:51:41.431208   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:51:41.431223   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:51:41.445292   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:51:41.445310   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:51:41.500320   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0717 13:51:41.500378   55100 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 13:51:41.500400   55100 out.go:239] * 
	W0717 13:51:41.500438   55100 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:51:41.500453   55100 out.go:239] * 
	W0717 13:51:41.501086   55100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:51:41.565607   55100 out.go:177] 
	W0717 13:51:41.628465   55100 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:51:41.628528   55100 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 13:51:41.628551   55100 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 13:51:41.649892   55100 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-378000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:43:18.243592347Z",
	            "FinishedAt": "2023-07-17T20:43:15.526421136Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb4bd38a73f8a928238b33fdcf768f03d1f6e61affe96cf87d115fa3b560c787",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59374"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59373"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb4bd38a73f8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c5672ca1166bb360f9c668d41d9fb619c5567113751944a7f3e23dab53a7fe9a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
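The inspect dump above can be narrowed to just the fields relevant for triage; docker inspect accepts a Go template via --format/-f, for example:

    docker inspect -f '{{.State.Status}}' old-k8s-version-378000
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-378000

(The second template is the same idiom the harness itself runs later in these logs.)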
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (359.140125ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
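The host reports Running yet status exits non-zero, i.e. some non-host component is unhealthy; the remaining status fields can be queried with the same Go-template flag (a sketch, assuming the standard Host/Kubelet/APIServer/Kubeconfig fields of minikube status):

    out/minikube-darwin-amd64 status -p old-k8s-version-378000 \
      --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'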
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25: (1.392696185s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kubenet-859000 sudo                                 | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:37 PDT |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-859000 sudo                                 | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-859000 sudo                                 | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:37 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-859000 sudo find                            | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:37 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-859000 sudo crio                            | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:37 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-859000                                      | kubenet-859000         | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:37 PDT |
	| start   | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:37 PDT | 17 Jul 23 13:38 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-148000             | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:38 PDT | 17 Jul 23 13:39 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:39 PDT | 17 Jul 23 13:39 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-148000                  | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:39 PDT | 17 Jul 23 13:39 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:39 PDT | 17 Jul 23 13:44 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-378000        | old-k8s-version-378000 | jenkins | v1.30.1 | 17 Jul 23 13:41 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-378000                              | old-k8s-version-378000 | jenkins | v1.30.1 | 17 Jul 23 13:43 PDT | 17 Jul 23 13:43 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-378000             | old-k8s-version-378000 | jenkins | v1.30.1 | 17 Jul 23 13:43 PDT | 17 Jul 23 13:43 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-378000                              | old-k8s-version-378000 | jenkins | v1.30.1 | 17 Jul 23 13:43 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-148000 sudo                              | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:45 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:45 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:45 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:45 PDT |
	| delete  | -p no-preload-148000                                   | no-preload-148000      | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:45 PDT |
	| start   | -p embed-certs-688000                                  | embed-certs-688000     | jenkins | v1.30.1 | 17 Jul 23 13:45 PDT | 17 Jul 23 13:46 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-688000            | embed-certs-688000     | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-688000                                  | embed-certs-688000     | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-688000                 | embed-certs-688000     | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-688000                                  | embed-certs-688000     | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 13:46:24
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 13:46:24.219213   55618 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:46:24.219378   55618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:46:24.219383   55618 out.go:309] Setting ErrFile to fd 2...
	I0717 13:46:24.219388   55618 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:46:24.219562   55618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:46:24.220852   55618 out.go:303] Setting JSON to false
	I0717 13:46:24.239602   55618 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17155,"bootTime":1689609629,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 13:46:24.239689   55618 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 13:46:24.263536   55618 out.go:177] * [embed-certs-688000] minikube v1.30.1 on Darwin 13.4.1
	I0717 13:46:24.305685   55618 notify.go:220] Checking for updates...
	I0717 13:46:24.305712   55618 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 13:46:24.327521   55618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:46:24.348557   55618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 13:46:24.369605   55618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 13:46:24.390676   55618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 13:46:24.411584   55618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 13:46:24.432876   55618 config.go:182] Loaded profile config "embed-certs-688000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:46:24.433405   55618 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 13:46:24.489105   55618 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 13:46:24.489220   55618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:46:24.586951   55618 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:46:24.576111027 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:46:24.608205   55618 out.go:177] * Using the docker driver based on existing profile
	I0717 13:46:24.629110   55618 start.go:298] selected driver: docker
	I0717 13:46:24.629138   55618 start.go:880] validating driver "docker" against &{Name:embed-certs-688000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:46:24.629279   55618 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 13:46:24.633052   55618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 13:46:24.731233   55618 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 20:46:24.719994005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 13:46:24.731447   55618 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 13:46:24.731469   55618 cni.go:84] Creating CNI manager for ""
	I0717 13:46:24.731480   55618 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 13:46:24.732097   55618 start_flags.go:319] config:
	{Name:embed-certs-688000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:46:24.754083   55618 out.go:177] * Starting control plane node embed-certs-688000 in cluster embed-certs-688000
	I0717 13:46:24.801693   55618 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 13:46:24.821498   55618 out.go:177] * Pulling base image ...
	I0717 13:46:24.863563   55618 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 13:46:24.863574   55618 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 13:46:24.863628   55618 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 13:46:24.863640   55618 cache.go:57] Caching tarball of preloaded images
	I0717 13:46:24.863744   55618 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 13:46:24.863755   55618 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 13:46:24.864350   55618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/config.json ...
	I0717 13:46:24.913765   55618 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 13:46:24.913780   55618 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 13:46:24.913801   55618 cache.go:195] Successfully downloaded all kic artifacts
	I0717 13:46:24.913839   55618 start.go:365] acquiring machines lock for embed-certs-688000: {Name:mk89f3b7ec186c088e15de073eb196803c333dd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 13:46:24.913924   55618 start.go:369] acquired machines lock for "embed-certs-688000" in 67.715µs
	I0717 13:46:24.913951   55618 start.go:96] Skipping create...Using existing machine configuration
	I0717 13:46:24.913959   55618 fix.go:54] fixHost starting: 
	I0717 13:46:24.914213   55618 cli_runner.go:164] Run: docker container inspect embed-certs-688000 --format={{.State.Status}}
	I0717 13:46:24.963119   55618 fix.go:102] recreateIfNeeded on embed-certs-688000: state=Stopped err=<nil>
	W0717 13:46:24.963157   55618 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 13:46:25.005265   55618 out.go:177] * Restarting existing docker container for "embed-certs-688000" ...
	I0717 13:46:22.154836   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:22.166995   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:22.188450   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.188463   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:22.188519   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:22.207768   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.207781   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:22.207848   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:22.227273   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.227287   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:22.227356   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:22.247452   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.247465   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:22.247531   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:22.266374   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.266388   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:22.266456   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:22.285790   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.285803   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:22.285873   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:22.305523   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.305537   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:22.305606   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:22.325494   55100 logs.go:284] 0 containers: []
	W0717 13:46:22.325508   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:22.325516   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:22.325524   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:22.364390   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:22.364404   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:22.378050   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:22.378064   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:22.433109   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:22.433122   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:22.433131   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:22.448620   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:22.448633   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:25.000531   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:25.012963   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:25.031370   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.031382   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:25.031464   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:25.052247   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.052261   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:25.052336   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:25.073589   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.073601   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:25.073671   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:25.093596   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.093610   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:25.093680   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:25.112611   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.112624   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:25.112697   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:25.132977   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.132992   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:25.133062   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:25.154791   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.154804   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:25.154886   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:25.175128   55100 logs.go:284] 0 containers: []
	W0717 13:46:25.175153   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:25.175165   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:25.175177   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:25.217623   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:25.217658   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:25.232708   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:25.232723   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:25.294654   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:25.294666   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:25.294674   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:25.312207   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:25.312224   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:25.026429   55618 cli_runner.go:164] Run: docker start embed-certs-688000
	I0717 13:46:25.274936   55618 cli_runner.go:164] Run: docker container inspect embed-certs-688000 --format={{.State.Status}}
	I0717 13:46:25.332622   55618 kic.go:426] container "embed-certs-688000" state is running.
	I0717 13:46:25.333253   55618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688000
	I0717 13:46:25.389609   55618 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/config.json ...
	I0717 13:46:25.390053   55618 machine.go:88] provisioning docker machine ...
	I0717 13:46:25.390087   55618 ubuntu.go:169] provisioning hostname "embed-certs-688000"
	I0717 13:46:25.390191   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:25.443400   55618 main.go:141] libmachine: Using SSH client type: native
	I0717 13:46:25.443837   55618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59515 <nil> <nil>}
	I0717 13:46:25.443851   55618 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-688000 && echo "embed-certs-688000" | sudo tee /etc/hostname
	I0717 13:46:25.444943   55618 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 13:46:28.587066   55618 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-688000
	
	I0717 13:46:28.587180   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:28.638251   55618 main.go:141] libmachine: Using SSH client type: native
	I0717 13:46:28.638590   55618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59515 <nil> <nil>}
	I0717 13:46:28.638605   55618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-688000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-688000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-688000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 13:46:28.767658   55618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 13:46:28.767678   55618 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 13:46:28.767702   55618 ubuntu.go:177] setting up certificates
	I0717 13:46:28.767710   55618 provision.go:83] configureAuth start
	I0717 13:46:28.767785   55618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688000
	I0717 13:46:28.817809   55618 provision.go:138] copyHostCerts
	I0717 13:46:28.817901   55618 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 13:46:28.817911   55618 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 13:46:28.818045   55618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 13:46:28.818280   55618 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 13:46:28.818286   55618 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 13:46:28.818351   55618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 13:46:28.818526   55618 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 13:46:28.818531   55618 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 13:46:28.818595   55618 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 13:46:28.818738   55618 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.embed-certs-688000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-688000]
	I0717 13:46:28.986440   55618 provision.go:172] copyRemoteCerts
	I0717 13:46:28.986499   55618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 13:46:28.986561   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.037759   55618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59515 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/embed-certs-688000/id_rsa Username:docker}
	I0717 13:46:29.131470   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 13:46:29.153371   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 13:46:29.174463   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 13:46:29.195418   55618 provision.go:86] duration metric: configureAuth took 427.693003ms
	I0717 13:46:29.195433   55618 ubuntu.go:193] setting minikube options for container-runtime
	I0717 13:46:29.195584   55618 config.go:182] Loaded profile config "embed-certs-688000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:46:29.195660   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.249235   55618 main.go:141] libmachine: Using SSH client type: native
	I0717 13:46:29.249575   55618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59515 <nil> <nil>}
	I0717 13:46:29.249586   55618 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 13:46:29.377257   55618 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 13:46:29.377272   55618 ubuntu.go:71] root file system type: overlay
	I0717 13:46:29.377372   55618 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 13:46:29.377468   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.428240   55618 main.go:141] libmachine: Using SSH client type: native
	I0717 13:46:29.428585   55618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59515 <nil> <nil>}
	I0717 13:46:29.428641   55618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 13:46:29.567687   55618 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 13:46:29.567816   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.618319   55618 main.go:141] libmachine: Using SSH client type: native
	I0717 13:46:29.618665   55618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 59515 <nil> <nil>}
	I0717 13:46:29.618677   55618 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 13:46:29.752411   55618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 13:46:29.752426   55618 machine.go:91] provisioned docker machine in 4.362370427s
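
The one-liner at 13:46:29.618677 is the idiom that makes unit updates idempotent: write the rendered unit to docker.service.new, and only when `diff -u` reports a difference move it into place and daemon-reload/enable/restart. Roughly the same logic in Go (a sketch; the systemctl invocations assume a systemd host and sufficient privileges):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit mirrors the logged idiom: only when the rendered unit differs
// from what is installed do we swap it in and bounce the service.
func installUnit(path string, rendered []byte) error {
	current, _ := os.ReadFile(path) // a missing file reads as empty, i.e. "differs"
	if bytes.Equal(current, rendered) {
		return nil // unchanged: skip the needless docker restart
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil { // the `sudo mv`
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%s: %v: %s", args[0], err, out)
		}
	}
	return nil
}

func main() {
	// Demo against a scratch path; the systemctl calls will fail outside a
	// privileged systemd host, which installUnit reports as an error.
	fmt.Println(installUnit("/tmp/docker.service.demo", []byte("[Unit]\nDescription=demo\n")))
}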
	I0717 13:46:29.752436   55618 start.go:300] post-start starting for "embed-certs-688000" (driver="docker")
	I0717 13:46:29.752447   55618 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 13:46:29.752518   55618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 13:46:29.752573   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.802004   55618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59515 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/embed-certs-688000/id_rsa Username:docker}
	I0717 13:46:29.895966   55618 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 13:46:29.900089   55618 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 13:46:29.900120   55618 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 13:46:29.900128   55618 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 13:46:29.900133   55618 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 13:46:29.900141   55618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 13:46:29.900230   55618 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 13:46:29.900377   55618 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 13:46:29.900551   55618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 13:46:29.909173   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:46:29.930128   55618 start.go:303] post-start completed in 177.681109ms
	I0717 13:46:29.930207   55618 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:46:29.930269   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:29.979832   55618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59515 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/embed-certs-688000/id_rsa Username:docker}
	I0717 13:46:30.068081   55618 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 13:46:30.073502   55618 fix.go:56] fixHost completed within 5.159550996s
	I0717 13:46:30.073517   55618 start.go:83] releasing machines lock for "embed-certs-688000", held for 5.159595546s
	I0717 13:46:30.073603   55618 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-688000
	I0717 13:46:30.123975   55618 ssh_runner.go:195] Run: cat /version.json
	I0717 13:46:30.123974   55618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 13:46:30.124093   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:30.124125   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:30.176863   55618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59515 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/embed-certs-688000/id_rsa Username:docker}
	I0717 13:46:30.176862   55618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59515 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/embed-certs-688000/id_rsa Username:docker}
	W0717 13:46:30.367774   55618 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 13:46:30.367888   55618 ssh_runner.go:195] Run: systemctl --version
	I0717 13:46:30.373111   55618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 13:46:30.378616   55618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 13:46:30.396298   55618 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 13:46:30.396374   55618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 13:46:30.405277   55618 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
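
The two find commands above patch any loopback CNI config in place: inject a "name": "loopback" key if it is missing and pin "cniVersion" to 1.0.0, then disable any bridge/podman configs by renaming them. Since the loopback file is JSON, the sed edit can also be expressed structurally; a sketch:

package main

import (
	"encoding/json"
	"fmt"
)

// patchLoopback does in Go what the logged find/sed pair does to the
// loopback CNI config: ensure it carries a "name" and pin cniVersion.
func patchLoopback(raw []byte) ([]byte, error) {
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		return nil, err
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	return json.MarshalIndent(conf, "", "  ")
}

func main() {
	out, err := patchLoopback([]byte(`{"cniVersion":"0.3.1","type":"loopback"}`))
	fmt.Println(string(out), err)
}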
	I0717 13:46:30.405296   55618 start.go:469] detecting cgroup driver to use...
	I0717 13:46:30.405317   55618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:46:30.405423   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:46:30.420609   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 13:46:30.430727   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 13:46:30.440591   55618 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 13:46:30.440653   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 13:46:30.450605   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:46:30.460176   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 13:46:30.470251   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 13:46:30.480179   55618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 13:46:30.489897   55618 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 13:46:30.499890   55618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 13:46:30.508498   55618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 13:46:30.516998   55618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:46:30.586835   55618 ssh_runner.go:195] Run: sudo systemctl restart containerd
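
The run of sed edits between 13:46:30.42 and 13:46:30.49 rewrites /etc/containerd/config.toml so containerd matches the detected "cgroupfs" driver; the key line forces SystemdCgroup = false while preserving indentation. The equivalent substitution in Go:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Same edit the logged sed performs: rewrite any "SystemdCgroup = ..."
	// line to false, keeping whatever indentation it had.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}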
	I0717 13:46:30.672308   55618 start.go:469] detecting cgroup driver to use...
	I0717 13:46:30.672327   55618 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 13:46:30.672401   55618 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 13:46:30.684146   55618 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 13:46:30.684215   55618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 13:46:30.695709   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 13:46:30.713442   55618 ssh_runner.go:195] Run: which cri-dockerd
	I0717 13:46:30.718156   55618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 13:46:30.730363   55618 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 13:46:30.751862   55618 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 13:46:30.863361   55618 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 13:46:30.947392   55618 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 13:46:30.947409   55618 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 13:46:30.967614   55618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:46:31.057609   55618 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 13:46:31.334154   55618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 13:46:31.402823   55618 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 13:46:31.471726   55618 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 13:46:31.544332   55618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:46:31.614457   55618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 13:46:31.628288   55618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 13:46:31.702433   55618 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 13:46:31.774607   55618 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 13:46:31.774715   55618 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 13:46:31.779546   55618 start.go:537] Will wait 60s for crictl version
	I0717 13:46:31.779613   55618 ssh_runner.go:195] Run: which crictl
	I0717 13:46:31.783892   55618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 13:46:31.828715   55618 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 13:46:31.828790   55618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:46:31.852469   55618 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 13:46:27.877995   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:27.888730   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:27.907101   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.907114   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:27.907182   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:27.926233   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.926246   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:27.926316   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:27.944942   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.944956   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:27.945027   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:27.963565   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.963577   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:27.963648   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:27.982652   55100 logs.go:284] 0 containers: []
	W0717 13:46:27.982667   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:27.982734   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:28.002545   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.002558   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:28.002629   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:28.021116   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.021128   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:28.021198   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:28.040047   55100 logs.go:284] 0 containers: []
	W0717 13:46:28.040061   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:28.040068   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:28.040075   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:28.090365   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:28.090379   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:28.129181   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:28.129213   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:28.143355   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:28.143371   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:28.198268   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:28.198280   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:28.198287   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
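
Each log-gathering round in process 55100 ends with the fallback chain `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a`: prefer crictl when it is on PATH, otherwise ask Docker directly. The same preference order in Go (sudo omitted for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors the logged fallback chain: use crictl when it is
// installed, otherwise fall back to plain `docker ps -a`.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		return exec.Command(path, "ps", "-a").CombinedOutput()
	}
	return exec.Command("docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := containerStatus()
	fmt.Println(string(out), err)
}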
	I0717 13:46:30.713907   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:30.724915   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:30.747634   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.747651   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:30.747731   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:30.776229   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.776242   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:30.776307   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:30.797856   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.797870   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:30.797942   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:30.819522   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.819542   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:30.819660   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:30.838759   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.838777   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:30.838871   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:30.862531   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.862548   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:30.862626   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:30.886436   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.886457   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:30.886541   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:30.919397   55100 logs.go:284] 0 containers: []
	W0717 13:46:30.919415   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:30.919439   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:30.919458   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:30.961186   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:30.961209   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:30.977283   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:30.977301   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:31.042538   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:31.042554   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:31.042560   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:31.059676   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:31.059688   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:31.922646   55618 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 13:46:31.922824   55618 cli_runner.go:164] Run: docker exec -t embed-certs-688000 dig +short host.docker.internal
	I0717 13:46:32.044131   55618 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 13:46:32.044262   55618 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 13:46:32.049397   55618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
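
The /etc/hosts edit above is deliberately idempotent: grep -v strips any existing line ending in a tab plus host.minikube.internal, the fresh mapping is appended, and the result is copied back over /etc/hosts via a temp file. The same transformation in Go, operating on the file contents in memory:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry reproduces the logged shell pipeline: drop any stale line
// for the name, then append the fresh mapping.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1\tlocalhost\n", "192.168.65.254", "host.minikube.internal"))
}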
	I0717 13:46:32.060182   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:32.129661   55618 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 13:46:32.129746   55618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:46:32.149985   55618 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 13:46:32.150004   55618 docker.go:566] Images already preloaded, skipping extraction
	I0717 13:46:32.150084   55618 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 13:46:32.169893   55618 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 13:46:32.169919   55618 cache_images.go:84] Images are preloaded, skipping loading
	I0717 13:46:32.170000   55618 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 13:46:32.222005   55618 cni.go:84] Creating CNI manager for ""
	I0717 13:46:32.222039   55618 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 13:46:32.222072   55618 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 13:46:32.222103   55618 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-688000 NodeName:embed-certs-688000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 13:46:32.222270   55618 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-688000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
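The kubeadm config printed above is rendered from the kubeadm options struct logged at 13:46:32.222103. The actual template lives in the minikube source; the toy reconstruction below only shows the mechanism (text/template driven by an options struct), and its field names are illustrative, not minikube's:

package main

import (
	"os"
	"text/template"
)

// opts holds a few of the values seen in the logged options struct.
type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.76.2",
		APIServerPort:     8443,
		NodeName:          "embed-certs-688000",
		KubernetesVersion: "v1.27.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	})
}
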
	I0717 13:46:32.222395   55618 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-688000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 13:46:32.222503   55618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 13:46:32.232481   55618 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 13:46:32.232557   55618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 13:46:32.241779   55618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 13:46:32.259549   55618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 13:46:32.275505   55618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 13:46:32.291311   55618 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 13:46:32.295730   55618 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 13:46:32.306822   55618 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000 for IP: 192.168.76.2
	I0717 13:46:32.306842   55618 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:46:32.307024   55618 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 13:46:32.307116   55618 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 13:46:32.307212   55618 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/client.key
	I0717 13:46:32.307275   55618 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/apiserver.key.31bdca25
	I0717 13:46:32.307324   55618 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/proxy-client.key
	I0717 13:46:32.307518   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 13:46:32.307555   55618 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 13:46:32.307566   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 13:46:32.307603   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 13:46:32.307640   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 13:46:32.307672   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 13:46:32.307740   55618 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 13:46:32.308316   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 13:46:32.330346   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 13:46:32.353113   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 13:46:32.375516   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/embed-certs-688000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 13:46:32.397392   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 13:46:32.419072   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 13:46:32.440397   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 13:46:32.463098   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 13:46:32.484343   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 13:46:32.505899   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 13:46:32.527213   55618 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 13:46:32.548675   55618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 13:46:32.564623   55618 ssh_runner.go:195] Run: openssl version
	I0717 13:46:32.570509   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 13:46:32.580045   55618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 13:46:32.584370   55618 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 13:46:32.584413   55618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 13:46:32.591168   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
	I0717 13:46:32.600170   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 13:46:32.609457   55618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 13:46:32.613655   55618 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 13:46:32.613704   55618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 13:46:32.620577   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 13:46:32.629412   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 13:46:32.638837   55618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:46:32.643242   55618 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:46:32.643291   55618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 13:46:32.650206   55618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
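
The three symlink commands above (51391683.0, 3ec20f2e.0, b5213941.0) implement OpenSSL's hashed-directory lookup: each CA certificate placed under /usr/share/ca-certificates must also be reachable in /etc/ssl/certs as <subject-hash>.0, where the hash comes from `openssl x509 -hash -noout`. A sketch of that pairing, shelling out to openssl for the hash (assumes openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash mirrors the logged command pair: ask openssl for the
// certificate's subject hash, then expose the cert as /etc/ssl/certs/<hash>.0
// so OpenSSL-based clients can find it during chain verification.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // `ln -fs` semantics: replace whatever is there
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
}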
	I0717 13:46:32.659137   55618 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 13:46:32.663303   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 13:46:32.670078   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 13:46:32.676808   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 13:46:32.683723   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 13:46:32.690771   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 13:46:32.697531   55618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
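
The six `-checkend 86400` runs above ask openssl whether each cluster certificate expires within the next 24 hours, so anything close to expiry can be renewed before the restart proceeds. The same check in pure Go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the pure-Go equivalent of `openssl x509 -checkend 86400`:
// report whether the certificate's NotAfter falls inside the given window.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}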
	I0717 13:46:32.704271   55618 kubeadm.go:404] StartCluster: {Name:embed-certs-688000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-688000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 13:46:32.704386   55618 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:46:32.724815   55618 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 13:46:32.733988   55618 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 13:46:32.734005   55618 kubeadm.go:636] restartCluster start
	I0717 13:46:32.734059   55618 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 13:46:32.742642   55618 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:32.742720   55618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-688000
	I0717 13:46:32.793458   55618 kubeconfig.go:135] verify returned: extract IP: "embed-certs-688000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 13:46:32.793627   55618 kubeconfig.go:146] "embed-certs-688000" context is missing from /Users/jenkins/minikube-integration/16890-37879/kubeconfig - will repair!
	I0717 13:46:32.793955   55618 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 13:46:32.795504   55618 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 13:46:32.804689   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:32.804765   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:32.814798   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:33.316926   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:33.317060   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:33.329492   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:33.815985   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:33.816078   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:33.826777   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
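
From 13:46:32.804689 onward, process 55618 settles into a wait loop: run `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms (the interval is inferred from the timestamps above) until the apiserver process appears or the restart path gives up. A sketch of that polling pattern:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls the way the log does: rerun pgrep on a short
// interval until the process shows up or the deadline passes.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pid found: apiserver is running
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(30 * time.Second))
}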
	I0717 13:46:33.611526   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:33.624046   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:33.643529   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.643557   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:33.643630   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:33.663736   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.663748   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:33.663813   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:33.682913   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.682929   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:33.682999   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:33.701433   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.701447   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:33.701516   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:33.719954   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.719967   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:33.720031   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:33.739111   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.739125   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:33.739193   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:33.758008   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.758022   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:33.758090   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:33.777988   55100 logs.go:284] 0 containers: []
	W0717 13:46:33.778002   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:33.778009   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:33.778016   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:33.815858   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:33.815872   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:33.829751   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:33.829767   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:33.922526   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:33.922537   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:33.922544   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:33.937869   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:33.937882   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:36.489973   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:36.502458   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:36.521292   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.521306   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:36.521378   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:36.539953   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.539966   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:36.540032   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:36.559635   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.559647   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:36.559713   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:36.578283   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.578297   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:36.578365   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:36.597965   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.597979   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:36.598048   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:36.617245   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.617258   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:36.617332   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:36.636292   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.636305   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:36.636373   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:36.656326   55100 logs.go:284] 0 containers: []
	W0717 13:46:36.656338   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:36.656345   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:36.656352   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:36.696542   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:36.696556   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:36.710282   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:36.710295   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:36.765199   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:36.765212   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:36.765219   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:36.780714   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:36.780731   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:34.316022   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:34.316155   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:34.328609   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:34.815967   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:34.816103   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:34.828403   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:35.315344   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:35.315442   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:35.326459   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:35.815045   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:35.815166   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:35.827904   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:36.316908   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:36.317113   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:36.329103   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:36.815351   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:36.815436   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:36.826413   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:37.315965   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:37.316116   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:37.328649   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:37.816383   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:37.816510   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:37.828868   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:38.316863   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:38.316938   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:38.328047   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:38.816617   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:38.816775   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:38.829086   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:39.333022   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:39.343709   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:39.362610   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.362623   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:39.362685   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:39.382794   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.382806   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:39.382877   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:39.401748   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.401762   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:39.401830   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:39.420766   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.420780   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:39.420848   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:39.439128   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.439142   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:39.439210   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:39.457743   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.457756   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:39.457823   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:39.476846   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.476862   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:39.476940   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:39.496475   55100 logs.go:284] 0 containers: []
	W0717 13:46:39.496489   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:39.496496   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:39.496503   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:39.533765   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:39.533779   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:39.547153   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:39.547166   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:39.603416   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:39.603429   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:39.603435   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:39.619030   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:39.619044   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
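Each log-gathering pass above follows the same recipe: list containers per control-plane component with a docker name filter, then pull kubelet, dmesg, and Docker logs plus container status. A sketch of the container-listing half (component names and filter shape taken from the log; the code itself is illustrative, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component list mirrors the docker ps filters in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}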
	I0717 13:46:39.316314   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:39.316499   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:39.328749   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:39.815159   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:39.815248   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:39.825938   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:40.315534   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:40.315702   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:40.328047   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:40.815913   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:40.816081   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:40.828361   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:41.315312   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:41.315397   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:41.326444   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:41.815880   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:41.816010   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:41.827990   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:42.314920   55618 api_server.go:166] Checking apiserver status ...
	I0717 13:46:42.314988   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 13:46:42.325555   55618 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:42.804911   55618 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 13:46:42.804960   55618 kubeadm.go:1128] stopping kube-system containers ...
	I0717 13:46:42.805080   55618 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 13:46:42.828568   55618 docker.go:462] Stopping containers: [e836fe6e2146 017d74ad93bf 1722a39db1e3 add0ecf85e3d 46e54a66f6a8 d8398826c2f3 e9b87f651834 2cbe9a0ba5ae 7a037d1c5e1f b434b293572b 91aad732ff31 48c336f38029 15f19d283e56 77637b1861ce fc895a352f1d e9077b086af0 6520ab979e53 6a361a119223 a700ab834719 56f7fc4323dd]
	I0717 13:46:42.828648   55618 ssh_runner.go:195] Run: docker stop e836fe6e2146 017d74ad93bf 1722a39db1e3 add0ecf85e3d 46e54a66f6a8 d8398826c2f3 e9b87f651834 2cbe9a0ba5ae 7a037d1c5e1f b434b293572b 91aad732ff31 48c336f38029 15f19d283e56 77637b1861ce fc895a352f1d e9077b086af0 6520ab979e53 6a361a119223 a700ab834719 56f7fc4323dd
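Stopping the kube-system containers is a two-step docker invocation, as the two lines above show: collect the matching container IDs, then pass them all to a single docker stop. An illustrative sketch (filter string copied from the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: collect IDs of containers whose names mark them as kube-system pods.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("nothing to stop")
		return
	}
	// Step 2: stop them all in one docker invocation, as the log does.
	if out, err := exec.Command("docker", append([]string{"stop"}, ids...)...).CombinedOutput(); err != nil {
		fmt.Printf("docker stop failed: %v\n%s\n", err, out)
	}
}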
	I0717 13:46:42.852170   55618 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 13:46:42.864858   55618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:46:42.874572   55618 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 20:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 20:45 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jul 17 20:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 20:45 /etc/kubernetes/scheduler.conf
	
	I0717 13:46:42.874648   55618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 13:46:42.883959   55618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 13:46:42.892786   55618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 13:46:42.901412   55618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:42.901470   55618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 13:46:42.910294   55618 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 13:46:42.919113   55618 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:46:42.919172   55618 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
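The grep-and-remove sequence above keeps only the kubeconfigs that already point at the expected control-plane endpoint; any file where sudo grep exits 1 is deleted so kubeadm can regenerate it. A sketch of the same check (endpoint and paths from the log; must run as root, since the real commands use sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f) // needs root, like the sudo grep in the log
		if err != nil {
			fmt.Printf("skipping %s: %v\n", f, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			// Mirrors: "... may not be in <file> - will remove" followed by rm -f.
			fmt.Printf("%s does not mention %s - removing\n", f, endpoint)
			_ = os.Remove(f)
		}
	}
}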
	I0717 13:46:42.927447   55618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:46:42.936109   55618 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 13:46:42.936126   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:42.985577   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:43.590861   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:43.734716   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:43.785258   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:43.860952   55618 api_server.go:52] waiting for apiserver process to appear ...
	I0717 13:46:43.861050   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
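The five kubeadm init phase commands above rebuild certs, kubeconfigs, kubelet config, the static control-plane manifests, and local etcd from the staged /var/tmp/minikube/kubeadm.yaml. A compact sketch of running them in that order (same shell invocation shape as the log; error handling simplified):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Phases in the order minikube runs them in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" `+
			`kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
	}
	fmt.Println("all kubeadm init phases completed")
}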
	I0717 13:46:42.169296   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:42.179861   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:42.198221   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.198234   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:42.198301   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:42.217279   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.217293   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:42.217361   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:42.236465   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.236479   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:42.236548   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:42.256514   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.256527   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:42.256586   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:42.275454   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.275467   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:42.275522   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:42.296411   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.296424   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:42.296479   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:42.317259   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.317271   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:42.317329   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:42.337060   55100 logs.go:284] 0 containers: []
	W0717 13:46:42.337073   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:42.337080   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:42.337088   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:42.378914   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:42.378930   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:42.392780   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:42.392795   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:42.448139   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:42.448152   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:42.448159   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:42.463297   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:42.463311   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:45.014581   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:45.026724   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:45.047354   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.047373   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:45.047461   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:45.073712   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.073727   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:45.073799   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:45.092920   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.092933   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:45.093007   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:45.111480   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.111497   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:45.111573   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:45.150808   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.150824   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:45.150914   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:45.174738   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.174751   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:45.174818   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:45.193307   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.193320   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:45.193389   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:45.211938   55100 logs.go:284] 0 containers: []
	W0717 13:46:45.211951   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:45.211957   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:45.211965   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:45.258968   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:45.258989   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:45.275955   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:45.275971   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:45.334126   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:45.334138   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:45.334146   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:45.352541   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:45.352556   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:44.424707   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:44.924960   55618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:44.937843   55618 api_server.go:72] duration metric: took 1.076892708s to wait for apiserver process to appear ...
	I0717 13:46:44.937861   55618 api_server.go:88] waiting for apiserver healthz status ...
	I0717 13:46:44.937884   55618 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59519/healthz ...
	I0717 13:46:47.083785   55618 api_server.go:279] https://127.0.0.1:59519/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 13:46:47.083810   55618 api_server.go:103] status: https://127.0.0.1:59519/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 13:46:47.583888   55618 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59519/healthz ...
	I0717 13:46:47.590612   55618 api_server.go:279] https://127.0.0.1:59519/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 13:46:47.590651   55618 api_server.go:103] status: https://127.0.0.1:59519/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 13:46:48.084211   55618 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59519/healthz ...
	I0717 13:46:48.089640   55618 api_server.go:279] https://127.0.0.1:59519/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 13:46:48.089667   55618 api_server.go:103] status: https://127.0.0.1:59519/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 13:46:48.584184   55618 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59519/healthz ...
	I0717 13:46:48.620981   55618 api_server.go:279] https://127.0.0.1:59519/healthz returned 200:
	ok
	I0717 13:46:48.641309   55618 api_server.go:141] control plane version: v1.27.3
	I0717 13:46:48.641336   55618 api_server.go:131] duration metric: took 3.703473915s to wait for apiserver health ...
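The healthz wait above is a simple HTTP poll: 403 (the anonymous-user rejection) and 500 (poststarthooks still failing) both count as "not ready", and the loop ends on the 200 "ok". A sketch under stated assumptions (endpoint from the log; TLS verification skipped here for brevity, whereas minikube authenticates with client certs, which is why it sees a 403 rather than a TLS error):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	const url = "https://127.0.0.1:59519/healthz" // port from the log above
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// 403 and 500 both mean "keep waiting", as in the log.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}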
	I0717 13:46:48.641348   55618 cni.go:84] Creating CNI manager for ""
	I0717 13:46:48.641364   55618 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 13:46:48.681951   55618 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 13:46:48.704936   55618 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 13:46:48.719631   55618 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 13:46:48.719647   55618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 13:46:48.742320   55618 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
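Applying the CNI manifest is the last Run: line above: the pinned kubectl binary applies the staged cni.yaml against the node-local kubeconfig. An illustrative sketch (paths from the log; assumes the manifest was already copied to the node, as the scp line shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same binary, kubeconfig, and manifest path as the Run: line above.
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("kubectl apply failed: %v\n%s\n", err, out)
		return
	}
	fmt.Println("CNI manifest applied")
}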
	I0717 13:46:47.907286   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:47.919090   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:47.937263   55100 logs.go:284] 0 containers: []
	W0717 13:46:47.937277   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:47.937346   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:47.955749   55100 logs.go:284] 0 containers: []
	W0717 13:46:47.955762   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:47.955828   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:47.975098   55100 logs.go:284] 0 containers: []
	W0717 13:46:47.975111   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:47.975178   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:47.994379   55100 logs.go:284] 0 containers: []
	W0717 13:46:47.994393   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:47.994462   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:48.014164   55100 logs.go:284] 0 containers: []
	W0717 13:46:48.014177   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:48.014247   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:48.032911   55100 logs.go:284] 0 containers: []
	W0717 13:46:48.032924   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:48.032990   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:48.054982   55100 logs.go:284] 0 containers: []
	W0717 13:46:48.054997   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:48.055069   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:48.073913   55100 logs.go:284] 0 containers: []
	W0717 13:46:48.073927   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:48.073934   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:48.073942   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:48.117415   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:48.117441   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:48.133803   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:48.133819   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:48.202577   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:48.202591   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:48.202598   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:48.218936   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:48.218955   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:50.783234   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:50.795692   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:50.814941   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.814954   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:50.815018   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:50.834190   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.834203   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:50.834272   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:50.853540   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.853552   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:50.853617   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:50.872552   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.872566   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:50.872644   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:50.893181   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.893193   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:50.893262   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:50.911344   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.911356   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:50.911426   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:50.930944   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.930956   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:50.931024   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:50.950885   55100 logs.go:284] 0 containers: []
	W0717 13:46:50.950899   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:50.950906   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:50.950913   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:50.990058   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:50.990072   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:51.003840   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:51.003855   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:51.059083   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:51.059095   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:51.059102   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:51.074401   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:51.074416   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:49.718301   55618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 13:46:49.725861   55618 system_pods.go:59] 9 kube-system pods found
	I0717 13:46:49.725880   55618 system_pods.go:61] "coredns-5d78c9869d-mh285" [d620c8b5-cd3a-4c17-b677-8bb2572ebdc8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 13:46:49.725888   55618 system_pods.go:61] "etcd-embed-certs-688000" [7e05f969-d1d9-4c5b-98b0-713ae5a55b7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 13:46:49.725894   55618 system_pods.go:61] "kindnet-4j4tm" [184dbbe0-fa67-4f15-b6e0-3008f9edbc20] Running
	I0717 13:46:49.725903   55618 system_pods.go:61] "kube-apiserver-embed-certs-688000" [ef276799-3868-406c-bb22-de5e6474952d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 13:46:49.725908   55618 system_pods.go:61] "kube-controller-manager-embed-certs-688000" [4229f61b-cc95-47e2-aeb8-c4bcc8e3bb77] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 13:46:49.725914   55618 system_pods.go:61] "kube-proxy-6nnvq" [6978b831-8165-43b0-bbd9-e36ba29e9e37] Running
	I0717 13:46:49.725921   55618 system_pods.go:61] "kube-scheduler-embed-certs-688000" [e2d17644-f122-48eb-9a1b-aa7295b550ce] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 13:46:49.725927   55618 system_pods.go:61] "metrics-server-74d5c6b9c-kfl8x" [2a17d526-933b-435a-93fc-5ed22776487a] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 13:46:49.725933   55618 system_pods.go:61] "storage-provisioner" [37bdcc61-c429-4886-8b23-b3dacc5a340a] Running
	I0717 13:46:49.725938   55618 system_pods.go:74] duration metric: took 7.625536ms to wait for pod list to return data ...
	I0717 13:46:49.725946   55618 node_conditions.go:102] verifying NodePressure condition ...
	I0717 13:46:49.729076   55618 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0717 13:46:49.729091   55618 node_conditions.go:123] node cpu capacity is 6
	I0717 13:46:49.729100   55618 node_conditions.go:105] duration metric: took 3.148986ms to run NodePressure ...
	I0717 13:46:49.729112   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 13:46:49.868051   55618 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 13:46:49.872378   55618 kubeadm.go:787] kubelet initialised
	I0717 13:46:49.872389   55618 kubeadm.go:788] duration metric: took 4.325656ms waiting for restarted kubelet to initialise ...
	I0717 13:46:49.872395   55618 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 13:46:49.878228   55618 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-mh285" in "kube-system" namespace to be "Ready" ...
	I0717 13:46:51.890707   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
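The pod_ready lines above (and the ones that recur below) are a readiness poll on the coredns pod, repeating until its Ready condition flips to True or the 4m0s budget runs out. A kubectl-based stand-in for that wait (minikube's pod_ready.go uses client-go instead; pod name and timeout are from the log, the 2s interval is assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Extract the pod's Ready condition status via a jsonpath query.
	const jsonpath = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod",
			"coredns-5d78c9869d-mh285", "-o", jsonpath).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("Ready=%q, waiting\n", status) // matches the status "Ready":"False" lines
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod readiness")
}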
	I0717 13:46:53.632601   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:53.644401   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:53.663313   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.663325   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:53.663392   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:53.682061   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.682074   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:53.682144   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:53.700859   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.700873   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:53.700950   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:53.719797   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.719809   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:53.719894   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:53.739949   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.739960   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:53.740028   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:53.758410   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.758423   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:53.758492   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:53.776768   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.776782   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:53.776850   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:53.796628   55100 logs.go:284] 0 containers: []
	W0717 13:46:53.796640   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:53.796647   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:53.796655   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:53.837266   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:53.837280   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:53.851099   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:53.851114   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:53.907019   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:53.907032   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:53.907042   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:53.922532   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:53.922547   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:56.475713   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:56.488004   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:56.507404   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.507418   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:56.507489   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:56.526032   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.526046   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:56.526116   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:56.545355   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.545369   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:56.545437   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:56.564669   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.564682   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:56.564748   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:56.583394   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.583408   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:56.583477   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:56.602907   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.602920   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:56.602989   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:56.621333   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.621346   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:56.621415   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:56.640391   55100 logs.go:284] 0 containers: []
	W0717 13:46:56.640404   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:56.640411   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:56.640420   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:56.680506   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:56.680521   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:56.694403   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:56.694417   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:56.752434   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:46:56.752446   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:56.752452   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:56.767944   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:56.767961   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:54.390658   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:46:56.391946   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:46:58.892166   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:46:59.318252   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:46:59.329327   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:46:59.349645   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.349658   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:46:59.349728   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:46:59.368639   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.368654   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:46:59.368734   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:46:59.389205   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.389220   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:46:59.389288   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:46:59.431811   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.431827   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:46:59.431906   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:46:59.451365   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.451379   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:46:59.451446   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:46:59.470875   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.470887   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:46:59.470954   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:46:59.489798   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.489810   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:46:59.489884   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:46:59.510384   55100 logs.go:284] 0 containers: []
	W0717 13:46:59.510397   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:46:59.510404   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:46:59.510418   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:46:59.525730   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:46:59.525744   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:46:59.578568   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:46:59.578601   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:46:59.616292   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:46:59.616307   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:46:59.630333   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:46:59.630348   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:46:59.685921   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:01.391832   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:03.889574   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:02.186339   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:02.198584   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:02.218089   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.218102   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:02.218175   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:02.238035   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.238047   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:02.238114   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:02.257324   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.257337   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:02.257407   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:02.276226   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.276240   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:02.276308   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:02.294416   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.294429   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:02.294498   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:02.312683   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.312696   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:02.312764   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:02.331778   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.331792   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:02.331871   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:02.352579   55100 logs.go:284] 0 containers: []
	W0717 13:47:02.352594   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:02.352605   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:02.352620   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:02.368586   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:02.368600   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:02.448770   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:02.448786   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:02.488527   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:02.488542   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:02.502114   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:02.502132   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:02.558392   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:05.058905   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:05.071919   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:05.092582   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.092596   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:05.092662   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:05.112186   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.112198   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:05.112258   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:05.131531   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.131544   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:05.131613   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:05.151709   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.151723   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:05.151796   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:05.170339   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.170352   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:05.170421   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:05.189080   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.189093   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:05.189161   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:05.207970   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.207984   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:05.208059   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:05.228166   55100 logs.go:284] 0 containers: []
	W0717 13:47:05.228179   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:05.228187   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:05.228197   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:05.283322   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:05.283335   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:05.283343   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:05.298351   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:05.298364   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:05.351149   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:05.351162   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:05.391898   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:05.391914   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
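Two processes are interleaved from here on: PID 55100 is the diagnostics loop above, and PID 55618 is a second test profile polling pod readiness, which is why the timestamps occasionally run backwards. A quick way to read one stream at a time, assuming the combined output has been saved to a file (test.log is a hypothetical name):

    grep ' 55100 ' test.log    # only the diagnostics loop
    grep ' 55618 ' test.log    # only the pod readiness polling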
	I0717 13:47:05.891992   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:07.892008   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:07.921984   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:07.934568   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:07.953397   55100 logs.go:284] 0 containers: []
	W0717 13:47:07.953410   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:07.953479   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:07.971818   55100 logs.go:284] 0 containers: []
	W0717 13:47:07.971835   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:07.971911   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:07.990904   55100 logs.go:284] 0 containers: []
	W0717 13:47:07.990918   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:07.990987   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:08.010644   55100 logs.go:284] 0 containers: []
	W0717 13:47:08.010658   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:08.010729   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:08.029111   55100 logs.go:284] 0 containers: []
	W0717 13:47:08.029124   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:08.029192   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:08.047309   55100 logs.go:284] 0 containers: []
	W0717 13:47:08.047322   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:08.047390   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:08.066438   55100 logs.go:284] 0 containers: []
	W0717 13:47:08.066451   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:08.066517   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:08.085170   55100 logs.go:284] 0 containers: []
	W0717 13:47:08.085183   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:08.085190   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:08.085196   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:08.124262   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:08.124277   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:08.138215   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:08.138230   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:08.193927   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:08.193941   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:08.193948   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:08.209306   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:08.209319   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:10.763686   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:10.775925   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:10.795275   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.795287   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:10.795353   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:10.815531   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.815543   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:10.815609   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:10.834289   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.834304   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:10.834371   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:10.853799   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.853816   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:10.853888   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:10.872943   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.872957   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:10.873025   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:10.891699   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.891711   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:10.891778   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:10.910894   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.910910   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:10.910977   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:10.929696   55100 logs.go:284] 0 containers: []
	W0717 13:47:10.929708   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:10.929715   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:10.929722   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:10.945281   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:10.945295   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:10.996068   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:10.996082   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:11.033766   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:11.033779   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:11.047305   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:11.047318   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:11.101637   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:10.391853   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:12.890562   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
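The pod_ready.go lines are minikube's own readiness poll, re-checking the coredns pod until its Ready condition flips to True. Roughly the same wait can be expressed with plain kubectl; a sketch, assuming kubectl is already pointed at this cluster:

    # Block until the pod reports Ready, using the same 4m0s budget
    # the harness quotes later in this log:
    kubectl -n kube-system wait --for=condition=Ready \
      pod/coredns-5d78c9869d-mh285 --timeout=4m0s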
	I0717 13:47:13.601796   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:13.612718   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:13.632067   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.632082   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:13.632153   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:13.651331   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.651343   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:13.651418   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:13.670704   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.670718   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:13.670785   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:13.690983   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.690996   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:13.691069   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:13.709168   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.709181   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:13.709250   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:13.727577   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.727589   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:13.727657   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:13.746154   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.746167   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:13.746234   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:13.765361   55100 logs.go:284] 0 containers: []
	W0717 13:47:13.765374   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:13.765380   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:13.765388   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:13.803460   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:13.803494   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:13.817359   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:13.817375   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:13.873383   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:13.873400   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:13.873407   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:13.888915   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:13.888930   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
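The "container status" step relies on a small shell fallback so the same gather works whether or not crictl is installed. The idiom, copied from the line above and annotated:

    # `which crictl` prints the path when crictl is installed; otherwise
    # `|| echo crictl` substitutes the bare name so the command line still
    # parses. If that first `ps -a` then fails (e.g. crictl is missing),
    # the outer `||` falls back to plain docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a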
	I0717 13:47:16.444217   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:16.456612   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:16.476232   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.476245   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:16.476313   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:16.496486   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.496499   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:16.496566   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:16.516342   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.516356   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:16.516426   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:16.535211   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.535224   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:16.535292   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:16.554050   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.554062   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:16.554127   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:16.575006   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.575020   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:16.575092   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:16.594029   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.594045   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:16.594122   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:16.620392   55100 logs.go:284] 0 containers: []
	W0717 13:47:16.620406   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:16.620413   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:16.620421   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:16.635160   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:16.635176   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:16.690066   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:16.690078   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:16.690086   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:16.705521   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:16.705535   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:16.756981   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:16.756995   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:14.892831   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:17.389974   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:19.296251   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:19.308319   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:19.327166   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.327180   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:19.327256   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:19.347315   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.347328   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:19.347399   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:19.367067   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.367080   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:19.367157   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:19.385788   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.385808   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:19.385895   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:19.406088   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.406100   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:19.406170   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:19.426149   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.426162   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:19.426230   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:19.445038   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.445052   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:19.445123   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:19.465191   55100 logs.go:284] 0 containers: []
	W0717 13:47:19.465204   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:19.465210   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:19.465218   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:19.503504   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:19.503518   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:19.517847   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:19.517861   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:19.573393   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:19.573405   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:19.573412   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:19.591562   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:19.591580   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:19.892116   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:22.390504   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:22.148863   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:22.161400   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:22.180619   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.180633   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:22.180706   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:22.199346   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.199361   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:22.199437   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:22.218367   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.218381   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:22.218447   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:22.237539   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.237553   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:22.237624   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:22.255912   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.255925   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:22.255992   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:22.276039   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.276052   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:22.276122   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:22.295376   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.295389   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:22.295459   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:22.314632   55100 logs.go:284] 0 containers: []
	W0717 13:47:22.314644   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:22.314651   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:22.314657   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:22.353334   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:22.353348   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:22.366919   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:22.366933   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:22.422708   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:22.422722   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:22.422728   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:22.437853   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:22.437866   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:24.989581   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:25.001542   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:25.021537   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.021560   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:25.021639   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:25.042600   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.042613   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:25.042685   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:25.062558   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.062570   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:25.062640   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:25.082697   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.082710   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:25.082777   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:25.103082   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.103096   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:25.103167   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:25.123181   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.123195   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:25.123264   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:25.142316   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.142328   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:25.142395   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:25.161215   55100 logs.go:284] 0 containers: []
	W0717 13:47:25.161228   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:25.161235   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:25.161242   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:25.201753   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:25.201767   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:25.215525   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:25.215546   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:25.272017   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:25.272030   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:25.272037   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:25.287328   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:25.287341   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:24.391518   55618 pod_ready.go:102] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:24.890047   55618 pod_ready.go:92] pod "coredns-5d78c9869d-mh285" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:24.890059   55618 pod_ready.go:81] duration metric: took 35.011884419s waiting for pod "coredns-5d78c9869d-mh285" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.890066   55618 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.895434   55618 pod_ready.go:92] pod "etcd-embed-certs-688000" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:24.895444   55618 pod_ready.go:81] duration metric: took 5.361719ms waiting for pod "etcd-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.895453   55618 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.900535   55618 pod_ready.go:92] pod "kube-apiserver-embed-certs-688000" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:24.900543   55618 pod_ready.go:81] duration metric: took 5.084373ms waiting for pod "kube-apiserver-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.900549   55618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.906141   55618 pod_ready.go:92] pod "kube-controller-manager-embed-certs-688000" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:24.906150   55618 pod_ready.go:81] duration metric: took 5.596887ms waiting for pod "kube-controller-manager-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.906159   55618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6nnvq" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.911646   55618 pod_ready.go:92] pod "kube-proxy-6nnvq" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:24.911657   55618 pod_ready.go:81] duration metric: took 5.491934ms waiting for pod "kube-proxy-6nnvq" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:24.911667   55618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:25.287066   55618 pod_ready.go:92] pod "kube-scheduler-embed-certs-688000" in "kube-system" namespace has status "Ready":"True"
	I0717 13:47:25.287076   55618 pod_ready.go:81] duration metric: took 375.39555ms waiting for pod "kube-scheduler-embed-certs-688000" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:25.287083   55618 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace to be "Ready" ...
	I0717 13:47:27.696354   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
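Unlike the control-plane pods just above, metrics-server-74d5c6b9c-kfl8x never reaches Ready in this excerpt, and the log does not say why. A sketch of the usual first look at a pod stuck in this state, assuming kubectl is pointed at the embed-certs-688000 cluster:

    # Conditions, events, and container statuses for the stuck pod:
    kubectl -n kube-system describe pod metrics-server-74d5c6b9c-kfl8x

    # Or just its Ready condition:
    kubectl -n kube-system get pod metrics-server-74d5c6b9c-kfl8x \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")]}'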
	I0717 13:47:27.836677   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:27.847540   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:27.866716   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.866730   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:27.866809   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:27.885457   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.885470   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:27.885539   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:27.922785   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.922800   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:27.922882   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:27.942980   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.942993   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:27.943062   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:27.963494   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.963506   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:27.963574   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:27.982995   55100 logs.go:284] 0 containers: []
	W0717 13:47:27.983009   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:27.983078   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:28.002153   55100 logs.go:284] 0 containers: []
	W0717 13:47:28.002166   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:28.002235   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:28.020616   55100 logs.go:284] 0 containers: []
	W0717 13:47:28.020629   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:28.020644   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:28.020652   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:28.060325   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:28.060340   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:28.074172   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:28.074186   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:28.130519   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:28.130531   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:28.130537   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:28.146358   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:28.146373   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:30.697853   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:30.708125   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:30.727388   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.727401   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:30.727469   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:30.746427   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.746441   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:30.746507   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:30.765156   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.765169   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:30.765237   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:30.785503   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.785516   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:30.785586   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:30.804761   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.804775   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:30.804857   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:30.825573   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.825588   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:30.825659   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:30.844650   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.844665   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:30.844741   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:30.865441   55100 logs.go:284] 0 containers: []
	W0717 13:47:30.865456   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:30.865463   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:30.865470   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:30.879344   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:30.879359   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:30.965310   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:30.965325   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:30.965332   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:30.980915   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:30.980928   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:31.031140   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:31.031155   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:30.196056   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:32.196820   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:33.572773   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:33.585436   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:33.607653   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.607666   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:33.607735   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:33.627728   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.627740   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:33.627826   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:33.647218   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.647231   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:33.647297   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:33.666597   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.666611   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:33.666680   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:33.686422   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.686436   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:33.686508   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:33.705892   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.705906   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:33.705971   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:33.724909   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.724923   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:33.724991   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:33.743360   55100 logs.go:284] 0 containers: []
	W0717 13:47:33.743374   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:33.743381   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:33.743387   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:33.756773   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:33.756791   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:33.815777   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:33.815795   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:33.815804   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:33.831711   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:33.831726   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:33.881236   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:33.881250   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:36.454919   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:36.467222   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:36.487204   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.487217   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:36.487283   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:36.506959   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.506972   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:36.507042   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:36.526848   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.526863   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:36.526930   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:36.545987   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.546002   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:36.546072   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:36.563988   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.564001   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:36.564068   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:36.584342   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.584355   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:36.584424   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:36.603581   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.603595   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:36.603663   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:36.622318   55100 logs.go:284] 0 containers: []
	W0717 13:47:36.622332   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:36.622347   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:36.622354   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:36.676988   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:36.677003   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:36.677009   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:36.692396   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:36.692411   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:36.743621   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:36.743636   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:36.783118   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:36.783133   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:34.697368   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:37.196710   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:39.298234   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:39.310301   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:39.330761   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.330777   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:39.330894   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:39.357868   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.357881   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:39.357945   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:39.377341   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.377354   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:39.377424   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:39.398362   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.398376   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:39.398443   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:39.417859   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.417873   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:39.417942   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:39.436550   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.436563   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:39.436632   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:39.456439   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.456452   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:39.456520   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:39.476275   55100 logs.go:284] 0 containers: []
	W0717 13:47:39.476287   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:39.476295   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:39.476302   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:39.526849   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:39.526864   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:39.565598   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:39.565612   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:39.579269   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:39.579282   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:39.634379   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:39.634391   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:39.634397   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:39.696501   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:41.696709   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:44.195798   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:42.150038   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:42.160474   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:42.179523   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.179536   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:42.179605   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:42.198920   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.198933   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:42.199001   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:42.218822   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.218835   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:42.218901   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:42.237752   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.237775   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:42.237851   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:42.256468   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.256480   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:42.256550   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:42.275686   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.275699   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:42.275781   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:42.295412   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.295425   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:42.295492   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:42.314287   55100 logs.go:284] 0 containers: []
	W0717 13:47:42.314301   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:42.314308   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:42.314315   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:42.352790   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:42.352808   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:42.366656   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:42.366671   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:42.421856   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:42.421869   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:42.421877   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:42.437205   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:42.437218   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
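The container-status command above uses a shell fallback so it works whether or not crictl is installed. The same command, annotated:

	# Prefer crictl when it is on PATH: `which crictl` prints its full path.
	# Otherwise `|| echo crictl` substitutes the bare name, that invocation
	# fails, and the `|| sudo docker ps -a` branch takes over.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a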
	I0717 13:47:44.988851   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:45.001040   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:47:45.020265   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.020279   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:47:45.020345   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:47:45.039574   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.039585   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:47:45.039654   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:47:45.059543   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.059555   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:47:45.059627   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:47:45.079185   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.079201   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:47:45.079272   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:47:45.099336   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.099350   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:47:45.099418   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:47:45.129428   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.129442   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:47:45.129510   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:47:45.148801   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.148814   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:47:45.148883   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:47:45.169017   55100 logs.go:284] 0 containers: []
	W0717 13:47:45.169030   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:47:45.169036   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:47:45.169044   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:47:45.209701   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:47:45.209717   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:47:45.223643   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:47:45.223659   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:47:45.280660   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 13:47:45.280672   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:47:45.280681   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:47:45.296216   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:47:45.296229   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:47:47.847866   55100 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:47:47.858365   55100 kubeadm.go:640] restartCluster took 4m11.990420193s
	W0717 13:47:47.858405   55100 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0717 13:47:47.858439   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:47:48.273989   55100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:47:48.285192   55100 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:47:48.294418   55100 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:47:48.294469   55100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:47:48.303704   55100 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
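The "Process exited with status 2" above just means none of the four kubeconfig files exist yet (GNU ls exits 2 when it cannot access an argument), so minikube skips stale-config cleanup and goes straight to kubeadm init. The same check can be reproduced by hand inside the node (a sketch; <profile> is a placeholder for the cluster's profile name):

	minikube ssh -p <profile> -- sudo ls -la \
	  /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	  /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	echo $?   # 2 while the files are still missing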
	I0717 13:47:48.303730   55100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:47:48.354441   55100 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:47:48.354478   55100 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:47:48.602759   55100 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:47:48.602897   55100 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:47:48.602980   55100 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:47:48.781069   55100 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:47:48.781848   55100 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:47:48.788562   55100 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:47:48.859643   55100 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:47:46.195960   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:48.196825   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:48.901886   55100 out.go:204]   - Generating certificates and keys ...
	I0717 13:47:48.901966   55100 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:47:48.902058   55100 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:47:48.902134   55100 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:47:48.902227   55100 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:47:48.902287   55100 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:47:48.902340   55100 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:47:48.902414   55100 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:47:48.902469   55100 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:47:48.902534   55100 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:47:48.902618   55100 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:47:48.902649   55100 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:47:48.902716   55100 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:47:49.025416   55100 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:47:49.111913   55100 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:47:49.208454   55100 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:47:49.404382   55100 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:47:49.404829   55100 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:47:49.447284   55100 out.go:204]   - Booting up control plane ...
	I0717 13:47:49.447457   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:47:49.447616   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:47:49.447747   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:47:49.447905   55100 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:47:49.448143   55100 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:47:50.695768   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:52.697322   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:54.697640   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:57.197113   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:47:59.694592   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:02.196774   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:04.695799   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:06.696290   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:09.196019   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:11.196267   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:13.695823   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:15.696139   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:18.194843   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:20.695638   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:22.695833   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:24.695895   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:27.195285   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:29.413910   55100 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:48:29.414781   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:29.415002   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:48:29.696020   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:31.696802   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:33.720067   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:34.416636   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:34.416832   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:48:36.195993   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:38.196433   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:40.695935   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:43.196520   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:44.417390   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:48:44.417529   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:48:45.196766   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:47.196807   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:49.695219   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:51.695414   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:54.196094   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:56.196630   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:48:58.696168   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:00.696367   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:02.696942   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:04.419383   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:49:04.419610   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:49:05.195644   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:07.695544   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:09.695639   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:11.695729   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:13.696701   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:16.195403   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:18.196741   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:20.197273   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:22.694253   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:24.696820   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:27.195754   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:29.197223   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:31.198062   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:33.696978   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:36.196345   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:38.694857   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:40.696073   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:42.696173   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:44.421349   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:49:44.421613   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:49:44.421634   55100 kubeadm.go:322] 
	I0717 13:49:44.421682   55100 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:49:44.421723   55100 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:49:44.421729   55100 kubeadm.go:322] 
	I0717 13:49:44.421765   55100 kubeadm.go:322] This error is likely caused by:
	I0717 13:49:44.421814   55100 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:49:44.421987   55100 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:49:44.422000   55100 kubeadm.go:322] 
	I0717 13:49:44.422128   55100 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:49:44.422203   55100 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:49:44.422254   55100 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:49:44.422269   55100 kubeadm.go:322] 
	I0717 13:49:44.422384   55100 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:49:44.422500   55100 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:49:44.422639   55100 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:49:44.422692   55100 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:49:44.422782   55100 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:49:44.422821   55100 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:49:44.424576   55100 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:49:44.424650   55100 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:49:44.424757   55100 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:49:44.424843   55100 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:49:44.424913   55100 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:49:44.424974   55100 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 13:49:44.425034   55100 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
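With the docker driver, the troubleshooting commands kubeadm suggests above have to be run inside the node container rather than on the host. A sketch that follows the log's own advice (<profile> and <CONTAINERID> are placeholders):

	minikube ssh -p <profile>
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	docker ps -a | grep kube | grep -v pause
	docker logs <CONTAINERID>   # logs of whichever container is failing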
	
	I0717 13:49:44.425064   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 13:49:44.837728   55100 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:49:44.849035   55100 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:49:44.849093   55100 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:49:44.857988   55100 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:49:44.858010   55100 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:49:44.910224   55100 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 13:49:44.910272   55100 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:49:45.153838   55100 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:49:45.153923   55100 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:49:45.153994   55100 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:49:45.335570   55100 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:49:45.336206   55100 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:49:45.342868   55100 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 13:49:45.409755   55100 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:49:45.431159   55100 out.go:204]   - Generating certificates and keys ...
	I0717 13:49:45.431228   55100 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:49:45.431308   55100 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:49:45.431407   55100 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:49:45.431475   55100 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:49:45.431558   55100 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:49:45.431631   55100 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:49:45.431698   55100 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:49:45.431776   55100 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:49:45.431835   55100 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:49:45.431911   55100 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:49:45.431967   55100 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:49:45.432031   55100 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:49:45.692593   55100 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:49:45.867599   55100 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:49:46.013236   55100 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:49:46.147579   55100 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:49:46.148127   55100 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:49:46.169678   55100 out.go:204]   - Booting up control plane ...
	I0717 13:49:46.169847   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:49:46.170021   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:49:46.170217   55100 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:49:46.170453   55100 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:49:46.170776   55100 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:49:45.193986   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:47.195178   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:49.195871   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:51.696376   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:54.196491   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:56.695835   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:49:58.696106   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:01.195269   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:03.196193   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:05.695515   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:07.695849   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:10.197275   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:12.696005   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:15.195605   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:17.696913   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:20.195654   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:22.695532   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:26.155927   55100 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 13:50:26.156336   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:26.156521   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:50:24.696225   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:27.196969   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:31.158086   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:31.158320   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:50:29.694964   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:31.696102   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:34.196164   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:36.695904   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:39.195640   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:41.158610   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:50:41.158758   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:50:41.697535   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:44.195371   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:46.695432   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:49.194900   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:51.196287   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:53.696517   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:56.194962   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:50:58.195294   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:01.160469   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:51:01.160680   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:51:00.195848   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:02.695425   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:04.695606   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:07.196830   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:09.695610   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:11.696765   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:14.195337   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:16.695584   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:19.197054   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:21.695480   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:24.196238   55618 pod_ready.go:102] pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace has status "Ready":"False"
	I0717 13:51:25.288155   55618 pod_ready.go:81] duration metric: took 4m0.001477125s waiting for pod "metrics-server-74d5c6b9c-kfl8x" in "kube-system" namespace to be "Ready" ...
	E0717 13:51:25.288187   55618 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0717 13:51:25.288213   55618 pod_ready.go:38] duration metric: took 4m35.416318237s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 13:51:25.288240   55618 kubeadm.go:640] restartCluster took 4m52.554767713s
	W0717 13:51:25.288290   55618 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0717 13:51:25.288325   55618 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0717 13:51:31.780067   55618 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (6.491738511s)
	I0717 13:51:31.780146   55618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:51:31.791925   55618 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 13:51:31.800811   55618 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 13:51:31.800866   55618 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 13:51:31.809710   55618 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 13:51:31.809742   55618 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 13:51:31.849683   55618 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 13:51:31.849734   55618 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 13:51:31.971836   55618 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 13:51:31.971920   55618 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 13:51:31.971999   55618 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 13:51:32.253613   55618 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 13:51:32.274921   55618 out.go:204]   - Generating certificates and keys ...
	I0717 13:51:32.274989   55618 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 13:51:32.275055   55618 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 13:51:32.275132   55618 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 13:51:32.275200   55618 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 13:51:32.275262   55618 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 13:51:32.275336   55618 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 13:51:32.275407   55618 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 13:51:32.275488   55618 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 13:51:32.275575   55618 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 13:51:32.275674   55618 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 13:51:32.275712   55618 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 13:51:32.275762   55618 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 13:51:32.433945   55618 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 13:51:32.654273   55618 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 13:51:32.738575   55618 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 13:51:32.880101   55618 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 13:51:32.905147   55618 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 13:51:32.905879   55618 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 13:51:32.905931   55618 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 13:51:32.983645   55618 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 13:51:33.004797   55618 out.go:204]   - Booting up control plane ...
	I0717 13:51:33.004900   55618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 13:51:33.004977   55618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 13:51:33.005050   55618 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 13:51:33.005114   55618 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 13:51:33.005242   55618 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 13:51:37.992999   55618 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.003635 seconds
	I0717 13:51:37.993156   55618 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 13:51:38.003398   55618 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 13:51:38.519802   55618 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 13:51:38.519961   55618 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-688000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 13:51:39.028371   55618 kubeadm.go:322] [bootstrap-token] Using token: 0mpt2y.g3hp5w80kw765hby
	I0717 13:51:39.067741   55618 out.go:204]   - Configuring RBAC rules ...
	I0717 13:51:39.067931   55618 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 13:51:39.108618   55618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 13:51:39.114913   55618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 13:51:39.117953   55618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 13:51:39.120435   55618 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 13:51:39.123276   55618 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 13:51:39.132702   55618 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 13:51:39.283321   55618 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 13:51:39.520899   55618 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 13:51:39.521715   55618 kubeadm.go:322] 
	I0717 13:51:39.521796   55618 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 13:51:39.521806   55618 kubeadm.go:322] 
	I0717 13:51:39.521879   55618 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 13:51:39.521887   55618 kubeadm.go:322] 
	I0717 13:51:39.521910   55618 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 13:51:39.521981   55618 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 13:51:39.522071   55618 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 13:51:39.522084   55618 kubeadm.go:322] 
	I0717 13:51:39.522132   55618 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 13:51:39.522150   55618 kubeadm.go:322] 
	I0717 13:51:39.522200   55618 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 13:51:39.522205   55618 kubeadm.go:322] 
	I0717 13:51:39.522243   55618 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 13:51:39.522316   55618 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 13:51:39.522411   55618 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 13:51:39.522419   55618 kubeadm.go:322] 
	I0717 13:51:39.522526   55618 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 13:51:39.522585   55618 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 13:51:39.522593   55618 kubeadm.go:322] 
	I0717 13:51:39.522692   55618 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 0mpt2y.g3hp5w80kw765hby \
	I0717 13:51:39.522797   55618 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0b7e38d80448c194e3e45f71aeab86c70d7b6050df812a832a1451b739743229 \
	I0717 13:51:39.522822   55618 kubeadm.go:322] 	--control-plane 
	I0717 13:51:39.522831   55618 kubeadm.go:322] 
	I0717 13:51:39.522921   55618 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 13:51:39.522927   55618 kubeadm.go:322] 
	I0717 13:51:39.523031   55618 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 0mpt2y.g3hp5w80kw765hby \
	I0717 13:51:39.523137   55618 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:0b7e38d80448c194e3e45f71aeab86c70d7b6050df812a832a1451b739743229 
	I0717 13:51:39.525435   55618 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0717 13:51:39.525548   55618 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
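The --discovery-token-ca-cert-hash printed in the join command above can be re-derived from the cluster CA if a join is attempted later, using the standard openssl pipeline from the kubeadm documentation. A sketch; the cert path follows the certificateDir ("/var/lib/minikube/certs") shown earlier in this log, but is still an assumption:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'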
	I0717 13:51:39.525565   55618 cni.go:84] Creating CNI manager for ""
	I0717 13:51:39.525586   55618 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 13:51:39.547379   55618 out.go:177] * Configuring CNI (Container Networking Interface) ...
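As the cni.go lines note, the docker driver plus docker runtime combination makes minikube deploy kindnet as the CNI. Whether it actually came up can be checked afterwards with kubectl (a sketch; the app=kindnet label is an assumption about how the kindnet DaemonSet is labelled, not something this log confirms):

	kubectl -n kube-system get daemonsets
	kubectl -n kube-system get pods -l app=kindnet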
	I0717 13:51:41.162096   55100 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 13:51:41.162311   55100 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 13:51:41.162327   55100 kubeadm.go:322] 
	I0717 13:51:41.162378   55100 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 13:51:41.162419   55100 kubeadm.go:322] 	timed out waiting for the condition
	I0717 13:51:41.162438   55100 kubeadm.go:322] 
	I0717 13:51:41.162488   55100 kubeadm.go:322] This error is likely caused by:
	I0717 13:51:41.162534   55100 kubeadm.go:322] 	- The kubelet is not running
	I0717 13:51:41.162702   55100 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 13:51:41.162718   55100 kubeadm.go:322] 
	I0717 13:51:41.162857   55100 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 13:51:41.162912   55100 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 13:51:41.162961   55100 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 13:51:41.162975   55100 kubeadm.go:322] 
	I0717 13:51:41.163090   55100 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 13:51:41.163209   55100 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 13:51:41.163311   55100 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 13:51:41.163371   55100 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 13:51:41.163464   55100 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 13:51:41.163508   55100 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 13:51:41.165287   55100 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 13:51:41.165357   55100 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 13:51:41.165476   55100 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 13:51:41.165572   55100 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 13:51:41.165643   55100 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 13:51:41.165703   55100 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
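The [WARNING IsDockerSystemdCheck] line recurs on every attempt in this run. Per the guide that warning links to, aligning Docker's cgroup driver with the kubelet means setting exec-opts in /etc/docker/daemon.json inside the node and restarting Docker. A sketch of the documented change, not something this run applied:

	sudo tee /etc/docker/daemon.json <<'EOF'
	{ "exec-opts": ["native.cgroupdriver=systemd"] }
	EOF
	sudo systemctl restart docker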
	I0717 13:51:41.165735   55100 kubeadm.go:406] StartCluster complete in 8m5.326883161s
	I0717 13:51:41.165833   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 13:51:41.184757   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.184770   55100 logs.go:286] No container was found matching "kube-apiserver"
	I0717 13:51:41.184842   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 13:51:41.204553   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.204565   55100 logs.go:286] No container was found matching "etcd"
	I0717 13:51:41.204631   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 13:51:41.223474   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.223487   55100 logs.go:286] No container was found matching "coredns"
	I0717 13:51:41.223557   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 13:51:41.244430   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.244444   55100 logs.go:286] No container was found matching "kube-scheduler"
	I0717 13:51:41.244517   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 13:51:41.264190   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.264205   55100 logs.go:286] No container was found matching "kube-proxy"
	I0717 13:51:41.264275   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 13:51:41.283668   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.283681   55100 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 13:51:41.283748   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 13:51:41.302922   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.302936   55100 logs.go:286] No container was found matching "kindnet"
	I0717 13:51:41.303004   55100 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 13:51:41.321395   55100 logs.go:284] 0 containers: []
	W0717 13:51:41.321409   55100 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 13:51:41.321416   55100 logs.go:123] Gathering logs for Docker ...
	I0717 13:51:41.321423   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 13:51:41.336937   55100 logs.go:123] Gathering logs for container status ...
	I0717 13:51:41.336949   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 13:51:41.390739   55100 logs.go:123] Gathering logs for kubelet ...
	I0717 13:51:41.390753   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 13:51:41.431208   55100 logs.go:123] Gathering logs for dmesg ...
	I0717 13:51:41.431223   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 13:51:41.445292   55100 logs.go:123] Gathering logs for describe nodes ...
	I0717 13:51:41.445310   55100 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 13:51:41.500320   55100 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0717 13:51:41.500378   55100 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 13:51:41.500400   55100 out.go:239] * 
	W0717 13:51:41.500438   55100 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:51:41.500453   55100 out.go:239] * 
	W0717 13:51:41.501086   55100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 13:51:41.565607   55100 out.go:177] 
	W0717 13:51:41.628465   55100 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 13:51:41.628528   55100 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 13:51:41.628551   55100 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 13:51:41.649892   55100 out.go:177] 
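The Suggestion printed above can be applied directly on a retry. A minimal sketch, using the profile name and Kubernetes version taken from this log; whether the systemd cgroup driver actually clears the kubelet failure on this host is not verified by this run:

	out/minikube-darwin-amd64 delete -p old-k8s-version-378000
	out/minikube-darwin-amd64 start -p old-k8s-version-378000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd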
	
	* 
	* ==> Docker <==
	* Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.094039630Z" level=info msg="Loading containers: start."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.181177867Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.217420608Z" level=info msg="Loading containers: done."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225759654Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225825774Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255125966Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255164231Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:24 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.581664014Z" level=info msg="Processing signal 'terminated'"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582662043Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582919471Z" level=info msg="Daemon shutdown complete"
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.637367128Z" level=info msg="Starting up"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.768639802Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.930935601Z" level=info msg="Loading containers: start."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.076080810Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.150208855Z" level=info msg="Loading containers: done."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158924505Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158986647Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188593160Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188664710Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:32 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-07-17T20:51:43Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:51:43 up  4:49,  0 users,  load average: 0.90, 0.89, 1.09
	Linux old-k8s-version-378000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 20:51:41 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 20:51:42 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Jul 17 20:51:42 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 20:51:42 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: I0717 20:51:42.659267   16770 server.go:410] Version: v1.16.0
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: I0717 20:51:42.659422   16770 plugins.go:100] No cloud provider specified.
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: I0717 20:51:42.659431   16770 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: I0717 20:51:42.661217   16770 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: W0717 20:51:42.661906   16770 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: W0717 20:51:42.661975   16770 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 20:51:42 old-k8s-version-378000 kubelet[16770]: F0717 20:51:42.662001   16770 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 20:51:42 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 20:51:42 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 20:51:43 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Jul 17 20:51:43 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 20:51:43 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: I0717 20:51:43.396668   16875 server.go:410] Version: v1.16.0
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: I0717 20:51:43.397075   16875 plugins.go:100] No cloud provider specified.
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: I0717 20:51:43.397086   16875 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: I0717 20:51:43.401154   16875 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: W0717 20:51:43.401809   16875 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: W0717 20:51:43.401877   16875 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 20:51:43 old-k8s-version-378000 kubelet[16875]: F0717 20:51:43.401903   16875 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 20:51:43 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 20:51:43 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
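The repeating fatal "failed to run Kubelet: mountpoint for cpu not found" is the proximate failure: this v1.16.0 kubelet expects a cgroup v1 cpu controller mount, and kubelets of that vintage do not support the unified cgroup v2 hierarchy. A quick check of which hierarchy the node actually runs, as a sketch ("cgroup2fs" indicates cgroup v2, "tmpfs" a v1 layout):

	out/minikube-darwin-amd64 ssh -p old-k8s-version-378000 -- stat -fc %T /sys/fs/cgroup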
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 13:51:43.414013   55746 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
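The kubeadm guidance repeated throughout the capture above reduces to four node-local commands. Collected here as a sketch, wrapped in minikube ssh so they can be run from the macOS host against this profile; CONTAINERID stands in for an ID taken from the docker ps output, as in the guidance:

	out/minikube-darwin-amd64 ssh -p old-k8s-version-378000 -- systemctl status kubelet
	out/minikube-darwin-amd64 ssh -p old-k8s-version-378000 -- journalctl -xeu kubelet
	out/minikube-darwin-amd64 ssh -p old-k8s-version-378000 -- "docker ps -a | grep kube | grep -v pause"
	out/minikube-darwin-amd64 ssh -p old-k8s-version-378000 -- docker logs CONTAINERID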
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (365.099552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-378000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (507.24s)
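The status probes above pull a single field with --format={{.APIServer}}. The format string is a Go template over minikube's status fields, so several can be read in one call; a sketch, with field names (Host, Kubelet, APIServer) as they appear in minikube status output:

	out/minikube-darwin-amd64 status -p old-k8s-version-378000 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

As the helpers note, a non-zero exit here "may be ok", since the exit status also encodes stopped components.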

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:51:56.290908   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:52:36.453773   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:53:48.403534   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:53:50.804184   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:53:58.403401   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:54:00.806558   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:54:18.497375   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:54:18.618677   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:54:32.026852   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:55:12.777746   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:55:21.451118   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:55:23.856962   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:55:27.741969   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:55:41.856824   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:56:25.567733   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:56:35.825524   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:56:50.792687   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:56:56.292176   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:57:25.357020   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:57:33.592278   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:57:35.079780   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:57:36.453282   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:57:48.615997   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:58:19.341208   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:58:50.805021   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:58:56.639802   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:58:58.405427   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:59:00.808626   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 13:59:18.619229   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:00:27.742851   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:00:41.858884   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (413.819658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-378000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
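The failed wait above polls pods by label through the test's client. The equivalent ad-hoc check with kubectl, using the namespace and label selector from the log (the context name is assumed to match the profile, as minikube configures by default); the 540s timeout mirrors the test's 9m0s budget:

	kubectl --context old-k8s-version-378000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-378000 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s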
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:43:18.243592347Z",
	            "FinishedAt": "2023-07-17T20:43:15.526421136Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb4bd38a73f8a928238b33fdcf768f03d1f6e61affe96cf87d115fa3b560c787",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59374"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59373"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb4bd38a73f8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c5672ca1166bb360f9c668d41d9fb619c5567113751944a7f3e23dab53a7fe9a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
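The EOF errors earlier in this test were against https://127.0.0.1:59373, which the inspect output above maps to the container's 8443/tcp apiserver port. A small sketch of pulling that binding out of `docker inspect` JSON (struct shapes are trimmed to just the fields used; this is an illustrative helper, not minikube's own code):

	// portmap.go: read `docker inspect <container>` JSON on stdin and print
	// the host-side binding for the apiserver port (8443/tcp).
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type binding struct {
		HostIp   string
		HostPort string
	}

	type container struct {
		NetworkSettings struct {
			Ports map[string][]binding
		}
	}

	func main() {
		var out []container // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
			panic(err)
		}
		if len(out) == 0 {
			panic("no container in inspect output")
		}
		for _, b := range out[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:59373 above
		}
	}

Usage would be along the lines of `docker inspect old-k8s-version-378000 | go run portmap.go`.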
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (414.565104ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25: (1.412530824s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p embed-certs-688000            | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-688000                 | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:46 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:46 PDT | 17 Jul 23 13:52 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-688000 sudo                             | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	| delete  | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	| delete  | -p                                                     | disable-driver-mounts-782000 | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | disable-driver-mounts-782000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:53 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981000  | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981000       | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-321000 --memory=2200 --alsologtostderr   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 14:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-321000             | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-321000                  | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-321000 --memory=2200 --alsologtostderr   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 14:00:24
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 14:00:24.767672   56587 out.go:296] Setting OutFile to fd 1 ...
	I0717 14:00:24.767842   56587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 14:00:24.767847   56587 out.go:309] Setting ErrFile to fd 2...
	I0717 14:00:24.767852   56587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 14:00:24.768026   56587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 14:00:24.769372   56587 out.go:303] Setting JSON to false
	I0717 14:00:24.788404   56587 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17995,"bootTime":1689609629,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 14:00:24.788489   56587 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 14:00:24.810094   56587 out.go:177] * [newest-cni-321000] minikube v1.30.1 on Darwin 13.4.1
	I0717 14:00:24.852116   56587 notify.go:220] Checking for updates...
	I0717 14:00:24.872873   56587 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 14:00:24.894085   56587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 14:00:24.915169   56587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 14:00:24.936019   56587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 14:00:24.957258   56587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 14:00:24.978125   56587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 14:00:24.999760   56587 config.go:182] Loaded profile config "newest-cni-321000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 14:00:25.000465   56587 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 14:00:25.056749   56587 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 14:00:25.056872   56587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 14:00:25.152179   56587 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 21:00:25.142114854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 14:00:25.173336   56587 out.go:177] * Using the docker driver based on existing profile
	I0717 14:00:25.215354   56587 start.go:298] selected driver: docker
	I0717 14:00:25.215377   56587 start.go:880] validating driver "docker" against &{Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:25.215529   56587 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 14:00:25.219503   56587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 14:00:25.324129   56587 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 21:00:25.307069883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 14:00:25.324372   56587 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 14:00:25.324395   56587 cni.go:84] Creating CNI manager for ""
	I0717 14:00:25.324405   56587 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 14:00:25.324416   56587 start_flags.go:319] config:
	{Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:25.347970   56587 out.go:177] * Starting control plane node newest-cni-321000 in cluster newest-cni-321000
	I0717 14:00:25.368698   56587 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 14:00:25.389630   56587 out.go:177] * Pulling base image ...
	I0717 14:00:25.431798   56587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 14:00:25.431799   56587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 14:00:25.431897   56587 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 14:00:25.431924   56587 cache.go:57] Caching tarball of preloaded images
	I0717 14:00:25.432145   56587 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 14:00:25.432745   56587 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 14:00:25.433324   56587 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/config.json ...
	I0717 14:00:25.482717   56587 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 14:00:25.482735   56587 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 14:00:25.482841   56587 cache.go:195] Successfully downloaded all kic artifacts
	I0717 14:00:25.482905   56587 start.go:365] acquiring machines lock for newest-cni-321000: {Name:mk2bee24d6161181ff3fee24ba20b24c4c42ef16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 14:00:25.483002   56587 start.go:369] acquired machines lock for "newest-cni-321000" in 75.787µs
	I0717 14:00:25.483028   56587 start.go:96] Skipping create...Using existing machine configuration
	I0717 14:00:25.483036   56587 fix.go:54] fixHost starting: 
	I0717 14:00:25.483283   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:25.532772   56587 fix.go:102] recreateIfNeeded on newest-cni-321000: state=Stopped err=<nil>
	W0717 14:00:25.532821   56587 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 14:00:25.554465   56587 out.go:177] * Restarting existing docker container for "newest-cni-321000" ...
	I0717 14:00:25.597499   56587 cli_runner.go:164] Run: docker start newest-cni-321000
	I0717 14:00:25.840837   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:25.891648   56587 kic.go:426] container "newest-cni-321000" state is running.
	I0717 14:00:25.892235   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:25.944520   56587 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/config.json ...
	I0717 14:00:25.944887   56587 machine.go:88] provisioning docker machine ...
	I0717 14:00:25.944910   56587 ubuntu.go:169] provisioning hostname "newest-cni-321000"
	I0717 14:00:25.944993   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:26.002364   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:26.002813   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:26.002832   56587 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-321000 && echo "newest-cni-321000" | sudo tee /etc/hostname
	I0717 14:00:26.004278   56587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 14:00:29.143011   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-321000
	
	I0717 14:00:29.143094   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.193528   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.193868   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.193881   56587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-321000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-321000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-321000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 14:00:29.323469   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 14:00:29.323493   56587 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 14:00:29.323513   56587 ubuntu.go:177] setting up certificates
	I0717 14:00:29.323521   56587 provision.go:83] configureAuth start
	I0717 14:00:29.323600   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:29.372778   56587 provision.go:138] copyHostCerts
	I0717 14:00:29.372880   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 14:00:29.372890   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 14:00:29.373013   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 14:00:29.373243   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 14:00:29.373249   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 14:00:29.373318   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 14:00:29.373506   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 14:00:29.373512   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 14:00:29.373580   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 14:00:29.373711   56587 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.newest-cni-321000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-321000]
	I0717 14:00:29.466369   56587 provision.go:172] copyRemoteCerts
	I0717 14:00:29.466427   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 14:00:29.466480   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.515642   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:29.609570   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 14:00:29.630245   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 14:00:29.651262   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 14:00:29.672368   56587 provision.go:86] duration metric: configureAuth took 348.840251ms
	I0717 14:00:29.672381   56587 ubuntu.go:193] setting minikube options for container-runtime
	I0717 14:00:29.672527   56587 config.go:182] Loaded profile config "newest-cni-321000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 14:00:29.672587   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.722189   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.722531   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.722542   56587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 14:00:29.849516   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 14:00:29.849531   56587 ubuntu.go:71] root file system type: overlay
	I0717 14:00:29.849608   56587 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 14:00:29.849700   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.899575   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.899919   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.899973   56587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 14:00:30.039217   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 14:00:30.039333   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.088982   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:30.089340   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:30.089354   56587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 14:00:30.221055   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 14:00:30.221071   56587 machine.go:91] provisioned docker machine in 4.276222101s
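
The one-liner above is the provisioner's idempotency guard: `diff -u` exits non-zero only when the freshly rendered docker.service.new differs from the live unit, and only then is the file moved into place and the daemon reloaded, re-enabled, and restarted, so repeated starts against an already-provisioned node cost one diff and zero restarts. A minimal local sketch of the same compare-then-swap pattern in Go (minikube runs it over SSH via ssh_runner; the paths come from the log, the use of local exec is an assumption for illustration):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and surfaces its combined output on failure.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

func main() {
	const cur = "/lib/systemd/system/docker.service"
	const next = cur + ".new"

	// diff exits 0 when the unit is unchanged; only a non-zero exit
	// (a difference, or a missing file) triggers the swap and restart,
	// mirroring the logged `diff -u ... || { mv ...; systemctl ...; }`.
	if run("sudo", "diff", "-u", cur, next) == nil {
		return
	}
	for _, c := range [][]string{
		{"sudo", "mv", next, cur},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	} {
		if err := run(c[0], c[1:]...); err != nil {
			panic(err)
		}
	}
}
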
	I0717 14:00:30.221083   56587 start.go:300] post-start starting for "newest-cni-321000" (driver="docker")
	I0717 14:00:30.221097   56587 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 14:00:30.221169   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 14:00:30.221232   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.270891   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.364322   56587 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 14:00:30.368497   56587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 14:00:30.368518   56587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 14:00:30.368525   56587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 14:00:30.368530   56587 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 14:00:30.368538   56587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 14:00:30.368623   56587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 14:00:30.368783   56587 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 14:00:30.368961   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 14:00:30.378051   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 14:00:30.399373   56587 start.go:303] post-start completed in 178.274722ms
	I0717 14:00:30.399459   56587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 14:00:30.399522   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.448607   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.538906   56587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 14:00:30.544045   56587 fix.go:56] fixHost completed within 5.061056018s
	I0717 14:00:30.544063   56587 start.go:83] releasing machines lock for "newest-cni-321000", held for 5.061106899s
	I0717 14:00:30.544153   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:30.594341   56587 ssh_runner.go:195] Run: cat /version.json
	I0717 14:00:30.594360   56587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 14:00:30.594416   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.594437   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.647973   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.647991   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	W0717 14:00:30.839919   56587 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 14:00:30.840003   56587 ssh_runner.go:195] Run: systemctl --version
	I0717 14:00:30.845429   56587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 14:00:30.850685   56587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 14:00:30.868179   56587 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
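
The find/sed pipeline above normalizes any loopback CNI config under /etc/cni/net.d: if the file lacks a "name" field one is injected ahead of `"type": "loopback"`, and cniVersion is pinned to 1.0.0 so current CNI plugins will accept it. A rough Go equivalent of that patch, applied to an in-memory example rather than the real directory:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Example input shaped like a pre-1.0 loopback conf; illustrative only.
	raw := []byte(`{"cniVersion": "0.3.1", "type": "loopback"}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	// Inject "name" only when missing, like the `grep -q name || sed -i` above.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0" // unconditional pin, like the second sed
	out, _ := json.Marshal(conf)
	fmt.Println(string(out))
}
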
	I0717 14:00:30.868247   56587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 14:00:30.877255   56587 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 14:00:30.877277   56587 start.go:469] detecting cgroup driver to use...
	I0717 14:00:30.877291   56587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 14:00:30.877396   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 14:00:30.892630   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 14:00:30.902658   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 14:00:30.912240   56587 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 14:00:30.912302   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 14:00:30.922182   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 14:00:30.931854   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 14:00:30.941572   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 14:00:30.951298   56587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 14:00:30.960409   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 14:00:30.970308   56587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 14:00:30.978580   56587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 14:00:30.986984   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:31.058401   56587 ssh_runner.go:195] Run: sudo systemctl restart containerd
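
Taken together, the sed edits above rewrite /etc/containerd/config.toml so that the sandbox image is pinned to registry.k8s.io/pause:3.9, restrict_oom_score_adj is off, SystemdCgroup is false (matching the "cgroupfs" driver detected on the host), every runtime handler is migrated to io.containerd.runc.v2, and conf_dir points at /etc/cni/net.d; the stale /etc/cni/net.mk tree is removed and containerd restarted. One of those edits expressed as a Go regexp, with a sample TOML fragment standing in for the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as the logged sed: match any indented SystemdCgroup line
	// and force it to false while preserving the indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	in := "  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n"
	fmt.Print(re.ReplaceAllString(in, "${1}SystemdCgroup = false"))
}
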
	I0717 14:00:31.133011   56587 start.go:469] detecting cgroup driver to use...
	I0717 14:00:31.133034   56587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 14:00:31.133099   56587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 14:00:31.145205   56587 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 14:00:31.145275   56587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 14:00:31.156948   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 14:00:31.173006   56587 ssh_runner.go:195] Run: which cri-dockerd
	I0717 14:00:31.177468   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 14:00:31.186468   56587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 14:00:31.227957   56587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 14:00:31.325610   56587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 14:00:31.394894   56587 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 14:00:31.394912   56587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 14:00:31.429023   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:31.498495   56587 ssh_runner.go:195] Run: sudo systemctl restart docker
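
docker.go:535 then pushes a 144-byte /etc/docker/daemon.json and bounces the daemon so Docker's cgroup driver agrees with the kubelet's. The log does not show the file's contents; a plausible sketch, assuming the usual minikube settings (exec-opts cgroupdriver, json-file logging, overlay2) rather than the exact bytes:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json fields; the only detail confirmed by the log is
	// that the cgroup driver is being set to "cgroupfs".
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
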
	I0717 14:00:31.780216   56587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 14:00:31.849620   56587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 14:00:31.922375   56587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 14:00:31.992975   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:32.069258   56587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 14:00:32.082931   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:32.158373   56587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 14:00:32.242036   56587 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 14:00:32.242183   56587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
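
start.go:516 gives cri-dockerd's socket 60 seconds to appear; the logged `stat` is one probe of that wait. The shape of the poll loop in Go, assuming a 500ms probe interval (the real interval is not shown in the log):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists as a UNIX socket or timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
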
	I0717 14:00:32.246776   56587 start.go:537] Will wait 60s for crictl version
	I0717 14:00:32.246842   56587 ssh_runner.go:195] Run: which crictl
	I0717 14:00:32.251190   56587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 14:00:32.294958   56587 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 14:00:32.295045   56587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 14:00:32.318919   56587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 14:00:32.387267   56587 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 14:00:32.387486   56587 cli_runner.go:164] Run: docker exec -t newest-cni-321000 dig +short host.docker.internal
	I0717 14:00:32.502875   56587 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 14:00:32.503003   56587 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 14:00:32.508060   56587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
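
The /etc/hosts rewrite above is idempotent: grep -v strips any existing host.minikube.internal line, the echo appends the current mapping, and the result is staged in /tmp/h.$$ before being copied over the live file. The same transformation sketched in Go, on an in-memory hosts file:

package main

import (
	"fmt"
	"strings"
)

// pinHost drops any line ending in "\t<name>" and appends "ip\tname",
// matching the logged `{ grep -v $'\t<name>$' ...; echo ...; }` pipeline.
func pinHost(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.65.2\thost.minikube.internal\n"
	fmt.Print(pinHost(hosts, "192.168.65.254", "host.minikube.internal"))
}
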
	I0717 14:00:32.519234   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:32.590913   56587 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 14:00:32.612593   56587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 14:00:32.612768   56587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 14:00:32.634082   56587 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 14:00:32.634103   56587 docker.go:566] Images already preloaded, skipping extraction
	I0717 14:00:32.634193   56587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 14:00:32.654367   56587 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 14:00:32.654385   56587 cache_images.go:84] Images are preloaded, skipping loading
	I0717 14:00:32.654469   56587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 14:00:32.705515   56587 cni.go:84] Creating CNI manager for ""
	I0717 14:00:32.705532   56587 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 14:00:32.705550   56587 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 14:00:32.705566   56587 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-321000 NodeName:newest-cni-321000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 14:00:32.705678   56587 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-321000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
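
kubeadm.go:181 logs the fully rendered config above; minikube produces it by feeding the options struct from kubeadm.go:176 through a Go text/template. A trimmed sketch of that rendering step, where the template fragment is an illustrative stand-in and not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// opts mirrors a few of the fields logged at kubeadm.go:176.
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
}

const frag = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(frag))
	if err := t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		NodeName:         "newest-cni-321000",
		PodSubnet:        "10.42.0.0/16",
	}); err != nil {
		panic(err)
	}
}
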
	
	I0717 14:00:32.705745   56587 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-321000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 14:00:32.705809   56587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 14:00:32.714629   56587 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 14:00:32.714689   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 14:00:32.723069   56587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 14:00:32.738955   56587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 14:00:32.754750   56587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0717 14:00:32.771104   56587 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 14:00:32.775386   56587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 14:00:32.786351   56587 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000 for IP: 192.168.76.2
	I0717 14:00:32.786368   56587 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:32.786530   56587 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 14:00:32.786582   56587 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 14:00:32.786678   56587 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/client.key
	I0717 14:00:32.786739   56587 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.key.31bdca25
	I0717 14:00:32.786790   56587 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.key
	I0717 14:00:32.787018   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 14:00:32.787054   56587 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 14:00:32.787065   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 14:00:32.787100   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 14:00:32.787135   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 14:00:32.787166   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 14:00:32.787244   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 14:00:32.787836   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 14:00:32.808913   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 14:00:32.830658   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 14:00:32.852807   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 14:00:32.875655   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 14:00:32.897063   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 14:00:32.919267   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 14:00:32.940704   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 14:00:32.961922   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 14:00:32.983093   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 14:00:33.004393   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 14:00:33.025350   56587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 14:00:33.041536   56587 ssh_runner.go:195] Run: openssl version
	I0717 14:00:33.047739   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 14:00:33.057311   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.061421   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.061465   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.068132   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 14:00:33.076974   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 14:00:33.086550   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.090706   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.090754   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.097689   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
	I0717 14:00:33.106638   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 14:00:33.116224   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.120399   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.120445   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.127154   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 14:00:33.135893   56587 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 14:00:33.140026   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 14:00:33.146773   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 14:00:33.153444   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 14:00:33.160115   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 14:00:33.167029   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 14:00:33.173853   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
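
Each `-checkend 86400` above asks openssl whether the certificate survives the next 24 hours (exit 0 if it does), which is how minikube decides the existing control-plane certs can be reused rather than regenerated. The same check in Go's crypto/x509, with one cert path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the PEM cert's NotAfter falls within the
// window — the case where `openssl x509 -checkend` would exit non-zero.
func expiresSoon(pemPath string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
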
	I0717 14:00:33.180696   56587 kubeadm.go:404] StartCluster: {Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Ex
traDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:33.180816   56587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 14:00:33.201323   56587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 14:00:33.210397   56587 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 14:00:33.210411   56587 kubeadm.go:636] restartCluster start
	I0717 14:00:33.210463   56587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 14:00:33.218762   56587 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:33.218832   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:33.269056   56587 kubeconfig.go:135] verify returned: extract IP: "newest-cni-321000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 14:00:33.269207   56587 kubeconfig.go:146] "newest-cni-321000" context is missing from /Users/jenkins/minikube-integration/16890-37879/kubeconfig - will repair!
	I0717 14:00:33.269538   56587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:33.271167   56587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 14:00:33.280336   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:33.280405   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:33.290362   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:33.791582   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:33.791747   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:33.804141   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:34.292458   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:34.292640   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:34.305008   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:34.792266   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:34.792399   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:34.804430   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:35.292007   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:35.292198   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:35.304320   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:35.790699   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:35.790848   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:35.802931   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:36.290481   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:36.290609   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:36.303596   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:36.790989   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:36.791063   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:36.802053   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:37.292426   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:37.292618   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:37.305380   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:37.790529   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:37.790654   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:37.801522   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:38.292444   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:38.292622   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:38.305024   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:38.791109   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:38.791190   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:38.801586   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:39.291167   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:39.291337   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:39.303526   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:39.790436   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:39.792141   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:39.802456   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:40.291325   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:40.291455   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:40.303687   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:40.791412   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:40.791515   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:40.804323   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:41.290511   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:41.290621   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:41.302421   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:41.790401   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:41.790488   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:41.802718   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:42.290448   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:42.290670   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:42.302607   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:42.791640   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:42.791757   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:42.804825   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.280646   56587 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
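
The run of "Checking apiserver status ..." entries above is a fixed-cadence poll: one pgrep attempt roughly every 500ms from 14:00:33.280 until the context expires exactly ten seconds later at 14:00:43.280, at which point restartCluster concludes the apiserver is gone and a reconfigure is needed. The shape of that loop in Go — the 10s/500ms figures are read off the timestamps, and the real implementation may differ:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			// Mirrors "needs reconfigure: apiserver error: context deadline exceeded".
			fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
			return
		case <-tick.C:
			// pgrep pattern taken verbatim from the logged command.
			if out, err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Output(); err == nil {
				fmt.Printf("apiserver pid: %s", out)
				return
			}
		}
	}
}
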
	I0717 14:00:43.280701   56587 kubeadm.go:1128] stopping kube-system containers ...
	I0717 14:00:43.280833   56587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 14:00:43.304726   56587 docker.go:462] Stopping containers: [b82ab9f7110d 53800f131ed8 ce7673d97635 5b97ee3acc28 74fc2e774f7d 20ea1fca7d0a 744cab2865d4 65b7f502beed a1e734a6386e 6a4b0c265759 8b4b58c7830f 25a1b4cfc932 192ba9463b66 a088e41f9e49 55a88ae014e2 9c9dc4f570de 6b842d2db189 56ac6856fc2c]
	I0717 14:00:43.304809   56587 ssh_runner.go:195] Run: docker stop b82ab9f7110d 53800f131ed8 ce7673d97635 5b97ee3acc28 74fc2e774f7d 20ea1fca7d0a 744cab2865d4 65b7f502beed a1e734a6386e 6a4b0c265759 8b4b58c7830f 25a1b4cfc932 192ba9463b66 a088e41f9e49 55a88ae014e2 9c9dc4f570de 6b842d2db189 56ac6856fc2c
	I0717 14:00:43.325981   56587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 14:00:43.338393   56587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 14:00:43.348038   56587 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 20:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 20:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 20:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 20:59 /etc/kubernetes/scheduler.conf
	
	I0717 14:00:43.348110   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 14:00:43.357564   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 14:00:43.367029   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 14:00:43.376594   56587 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.376652   56587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 14:00:43.386819   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 14:00:43.396100   56587 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.396150   56587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 14:00:43.405537   56587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 14:00:43.414996   56587 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 14:00:43.415009   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:43.464740   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.074402   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.217499   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.271469   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.357483   56587 api_server.go:52] waiting for apiserver process to appear ...
	I0717 14:00:44.357595   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	
	* 
	* ==> Docker <==
	* Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.094039630Z" level=info msg="Loading containers: start."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.181177867Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.217420608Z" level=info msg="Loading containers: done."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225759654Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225825774Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255125966Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255164231Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:24 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.581664014Z" level=info msg="Processing signal 'terminated'"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582662043Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582919471Z" level=info msg="Daemon shutdown complete"
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.637367128Z" level=info msg="Starting up"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.768639802Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.930935601Z" level=info msg="Loading containers: start."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.076080810Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.150208855Z" level=info msg="Loading containers: done."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158924505Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158986647Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188593160Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188664710Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:32 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-07-17T21:00:46Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  21:00:46 up  4:58,  0 users,  load average: 0.75, 0.92, 1.01
	Linux old-k8s-version-378000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 21:00:44 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 21:00:45 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 879.
	Jul 17 21:00:45 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 21:00:45 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: I0717 21:00:45.679684   26000 server.go:410] Version: v1.16.0
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: I0717 21:00:45.680193   26000 plugins.go:100] No cloud provider specified.
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: I0717 21:00:45.680235   26000 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: I0717 21:00:45.682186   26000 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: W0717 21:00:45.682942   26000 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: W0717 21:00:45.683014   26000 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 21:00:45 old-k8s-version-378000 kubelet[26000]: F0717 21:00:45.683042   26000 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 21:00:45 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 21:00:45 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 21:00:46 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 880.
	Jul 17 21:00:46 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 21:00:46 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: I0717 21:00:46.427627   26071 server.go:410] Version: v1.16.0
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: I0717 21:00:46.427867   26071 plugins.go:100] No cloud provider specified.
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: I0717 21:00:46.427878   26071 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: I0717 21:00:46.429917   26071 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: W0717 21:00:46.430690   26071 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: W0717 21:00:46.430767   26071 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 21:00:46 old-k8s-version-378000 kubelet[26071]: F0717 21:00:46.430793   26071 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 21:00:46 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 21:00:46 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
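
The crash loop above (restart counter 879, 880, ...) always dies at the same line: the v1.16 kubelet insists on a cgroup v1 hierarchy with the cpu controller, and on this node it finds none, which is the likely root cause of the old-k8s-version failures rather than anything in the test body itself. A simple stand-in for that mountpoint lookup, scanning /proc/mounts for a v1 cgroup mount carrying the cpu controller:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 4 && fields[2] == "cgroup" &&
			strings.Contains(","+fields[3]+",", ",cpu,") {
			fmt.Println("cgroup v1 cpu controller mounted at", fields[1])
			return
		}
	}
	fmt.Println("mountpoint for cpu not found (cgroup v2-only host?)")
}
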
	
	

-- /stdout --
** stderr ** 
	E0717 14:00:46.727284   56681 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (411.5912ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-378000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.23s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (376.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:01:25.569586   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:01:56.293799   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:02:25.357122   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:02:33.593427   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 14:02:36.456561   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:03:21.434518   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.440375   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.450538   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.471026   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.512613   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.594726   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:21.755184   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:22.077369   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:22.717642   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:24.000014   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
E0717 14:03:26.561215   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:03:31.681501   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:03:41.921984   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:03:50.804515   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 14:03:58.404821   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:03:59.517102   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 14:04:00.806519   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 14:04:02.401946   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:04:18.618307   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:04:32.026920   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:04:43.361732   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:05:12.779729   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 14:05:13.858821   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:05:27.740751   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:05:41.857264   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:06:05.283055   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/default-k8s-diff-port-981000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:06:25.566448   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 14:06:56.288996   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:59373/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (369.852474ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-378000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-378000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-378000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (3.007µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-378000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
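The readiness check that times out above polls the kubernetes-dashboard namespace for pods matching k8s-app=kubernetes-dashboard. A minimal sketch for reproducing that check by hand, assuming the profile name from this log and that the cluster is still up:

	# List the pods the test harness is polling for (same namespace and label selector)
	kubectl --context old-k8s-version-378000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Check the apiserver state first; the repeated EOF warnings suggest it was unreachable
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000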
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-378000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-378000:
-- stdout --
	[
	    {
	        "Id": "b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666",
	        "Created": "2023-07-17T20:37:05.574347632Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741668,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T20:43:18.243592347Z",
	            "FinishedAt": "2023-07-17T20:43:15.526421136Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hostname",
	        "HostsPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/hosts",
	        "LogPath": "/var/lib/docker/containers/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666/b5cf72528f711ac4193d0a9a8f59a539b1c9a82e56b492c4504ac5a55d9b9666-json.log",
	        "Name": "/old-k8s-version-378000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-378000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-378000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b-init/diff:/var/lib/docker/overlay2/e56ac82b253363a3e2a8ef1d32b035837a0160e70c091e0204df14a88b273cb0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/92e903a37111c1be0b41a42b0b482279b759da84e66b3f0a99d79bad046a816b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-378000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-378000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-378000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-378000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bb4bd38a73f8a928238b33fdcf768f03d1f6e61affe96cf87d115fa3b560c787",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59374"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59375"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59376"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59377"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "59373"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/bb4bd38a73f8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-378000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b5cf72528f71",
	                        "old-k8s-version-378000"
	                    ],
	                    "NetworkID": "c3d985d4d6f8171a299a582295ee1a9b4b599d36307c61b13f7920634885fa85",
	                    "EndpointID": "c5672ca1166bb360f9c668d41d9fb619c5567113751944a7f3e23dab53a7fe9a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
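The repeated EOF warnings earlier in this test target https://127.0.0.1:59373, which the inspect output above maps to the container's 8443/tcp apiserver port. A quick sketch for confirming that mapping, using the same Go-template style the harness itself uses later in this log (container name taken from the output above):

	# Print the host port Docker published for 8443/tcp; 59373 is expected from the inspect output
	docker inspect old-k8s-version-378000 --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'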
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (358.372735ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
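Host reports Running while the apiserver reports Stopped, consistent with the container being up but kube-apiserver down inside it. Both fields can be read in one call; a sketch, assuming minikube's --format flag accepts an arbitrary Go template over the status struct:

	# Show host and apiserver state together for the failing profile
	out/minikube-darwin-amd64 status -p old-k8s-version-378000 --format '{{.Host}} {{.APIServer}}'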
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-378000 logs -n 25: (1.358467801s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	| delete  | -p embed-certs-688000                                  | embed-certs-688000           | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	| delete  | -p                                                     | disable-driver-mounts-782000 | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:52 PDT |
	|         | disable-driver-mounts-782000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:52 PDT | 17 Jul 23 13:53 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-981000  | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-981000       | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:53 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:53 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-981000 | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 13:59 PDT |
	|         | default-k8s-diff-port-981000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-321000 --memory=2200 --alsologtostderr   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 13:59 PDT | 17 Jul 23 14:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-321000             | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-321000                  | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-321000 --memory=2200 --alsologtostderr   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-321000 sudo                              | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:00 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:00 PDT | 17 Jul 23 14:01 PDT |
	| delete  | -p newest-cni-321000                                   | newest-cni-321000            | jenkins | v1.30.1 | 17 Jul 23 14:01 PDT | 17 Jul 23 14:01 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 14:00:24
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 14:00:24.767672   56587 out.go:296] Setting OutFile to fd 1 ...
	I0717 14:00:24.767842   56587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 14:00:24.767847   56587 out.go:309] Setting ErrFile to fd 2...
	I0717 14:00:24.767852   56587 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 14:00:24.768026   56587 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 14:00:24.769372   56587 out.go:303] Setting JSON to false
	I0717 14:00:24.788404   56587 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":17995,"bootTime":1689609629,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 14:00:24.788489   56587 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 14:00:24.810094   56587 out.go:177] * [newest-cni-321000] minikube v1.30.1 on Darwin 13.4.1
	I0717 14:00:24.852116   56587 notify.go:220] Checking for updates...
	I0717 14:00:24.872873   56587 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 14:00:24.894085   56587 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 14:00:24.915169   56587 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 14:00:24.936019   56587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 14:00:24.957258   56587 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 14:00:24.978125   56587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 14:00:24.999760   56587 config.go:182] Loaded profile config "newest-cni-321000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 14:00:25.000465   56587 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 14:00:25.056749   56587 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 14:00:25.056872   56587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 14:00:25.152179   56587 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 21:00:25.142114854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 14:00:25.173336   56587 out.go:177] * Using the docker driver based on existing profile
	I0717 14:00:25.215354   56587 start.go:298] selected driver: docker
	I0717 14:00:25.215377   56587 start.go:880] validating driver "docker" against &{Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:25.215529   56587 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 14:00:25.219503   56587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 14:00:25.324129   56587 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 21:00:25.307069883 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 14:00:25.324372   56587 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 14:00:25.324395   56587 cni.go:84] Creating CNI manager for ""
	I0717 14:00:25.324405   56587 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 14:00:25.324416   56587 start_flags.go:319] config:
	{Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:25.347970   56587 out.go:177] * Starting control plane node newest-cni-321000 in cluster newest-cni-321000
	I0717 14:00:25.368698   56587 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 14:00:25.389630   56587 out.go:177] * Pulling base image ...
	I0717 14:00:25.431798   56587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 14:00:25.431799   56587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 14:00:25.431897   56587 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 14:00:25.431924   56587 cache.go:57] Caching tarball of preloaded images
	I0717 14:00:25.432145   56587 preload.go:174] Found /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 14:00:25.432745   56587 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 14:00:25.433324   56587 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/config.json ...
	I0717 14:00:25.482717   56587 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 14:00:25.482735   56587 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 14:00:25.482841   56587 cache.go:195] Successfully downloaded all kic artifacts
	I0717 14:00:25.482905   56587 start.go:365] acquiring machines lock for newest-cni-321000: {Name:mk2bee24d6161181ff3fee24ba20b24c4c42ef16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 14:00:25.483002   56587 start.go:369] acquired machines lock for "newest-cni-321000" in 75.787µs
	I0717 14:00:25.483028   56587 start.go:96] Skipping create...Using existing machine configuration
	I0717 14:00:25.483036   56587 fix.go:54] fixHost starting: 
	I0717 14:00:25.483283   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:25.532772   56587 fix.go:102] recreateIfNeeded on newest-cni-321000: state=Stopped err=<nil>
	W0717 14:00:25.532821   56587 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 14:00:25.554465   56587 out.go:177] * Restarting existing docker container for "newest-cni-321000" ...
	I0717 14:00:25.597499   56587 cli_runner.go:164] Run: docker start newest-cni-321000
	I0717 14:00:25.840837   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:25.891648   56587 kic.go:426] container "newest-cni-321000" state is running.
	I0717 14:00:25.892235   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:25.944520   56587 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/config.json ...
	I0717 14:00:25.944887   56587 machine.go:88] provisioning docker machine ...
	I0717 14:00:25.944910   56587 ubuntu.go:169] provisioning hostname "newest-cni-321000"
	I0717 14:00:25.944993   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:26.002364   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:26.002813   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:26.002832   56587 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-321000 && echo "newest-cni-321000" | sudo tee /etc/hostname
	I0717 14:00:26.004278   56587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 14:00:29.143011   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-321000
	
	I0717 14:00:29.143094   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.193528   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.193868   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.193881   56587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-321000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-321000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-321000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 14:00:29.323469   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 14:00:29.323493   56587 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16890-37879/.minikube CaCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16890-37879/.minikube}
	I0717 14:00:29.323513   56587 ubuntu.go:177] setting up certificates
	I0717 14:00:29.323521   56587 provision.go:83] configureAuth start
	I0717 14:00:29.323600   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:29.372778   56587 provision.go:138] copyHostCerts
	I0717 14:00:29.372880   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem, removing ...
	I0717 14:00:29.372890   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem
	I0717 14:00:29.373013   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/key.pem (1679 bytes)
	I0717 14:00:29.373243   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem, removing ...
	I0717 14:00:29.373249   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem
	I0717 14:00:29.373318   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.pem (1078 bytes)
	I0717 14:00:29.373506   56587 exec_runner.go:144] found /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem, removing ...
	I0717 14:00:29.373512   56587 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem
	I0717 14:00:29.373580   56587 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16890-37879/.minikube/cert.pem (1123 bytes)
	I0717 14:00:29.373711   56587 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem org=jenkins.newest-cni-321000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-321000]
	I0717 14:00:29.466369   56587 provision.go:172] copyRemoteCerts
	I0717 14:00:29.466427   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 14:00:29.466480   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.515642   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:29.609570   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 14:00:29.630245   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 14:00:29.651262   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 14:00:29.672368   56587 provision.go:86] duration metric: configureAuth took 348.840251ms
	I0717 14:00:29.672381   56587 ubuntu.go:193] setting minikube options for container-runtime
	I0717 14:00:29.672527   56587 config.go:182] Loaded profile config "newest-cni-321000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 14:00:29.672587   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.722189   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.722531   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.722542   56587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 14:00:29.849516   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 14:00:29.849531   56587 ubuntu.go:71] root file system type: overlay
	I0717 14:00:29.849608   56587 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 14:00:29.849700   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:29.899575   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:29.899919   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:29.899973   56587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 14:00:30.039217   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 14:00:30.039333   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.088982   56587 main.go:141] libmachine: Using SSH client type: native
	I0717 14:00:30.089340   56587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 60369 <nil> <nil>}
	I0717 14:00:30.089354   56587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 14:00:30.221055   56587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 14:00:30.221071   56587 machine.go:91] provisioned docker machine in 4.276222101s
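
The unit update at 14:00:30.089 is deliberately idempotent: docker.service.new replaces the installed unit, followed by daemon-reload, enable, and restart, only when diff reports a change. A minimal Go sketch of the same compare-then-replace pattern (hypothetical, not minikube's code):

	package main

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func main() {
		// A missing installed unit reads as nil, which won't equal the new
		// rendering, so first-time installs fall through to the replace path.
		cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
		next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		if bytes.Equal(cur, next) {
			return // unit unchanged: skip the needless docker restart
		}
		if err := os.Rename("/lib/systemd/system/docker.service.new",
			"/lib/systemd/system/docker.service"); err != nil {
			panic(err)
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				panic(string(out))
			}
		}
	}
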
	I0717 14:00:30.221083   56587 start.go:300] post-start starting for "newest-cni-321000" (driver="docker")
	I0717 14:00:30.221097   56587 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 14:00:30.221169   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 14:00:30.221232   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.270891   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.364322   56587 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 14:00:30.368497   56587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 14:00:30.368518   56587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 14:00:30.368525   56587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 14:00:30.368530   56587 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 14:00:30.368538   56587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/addons for local assets ...
	I0717 14:00:30.368623   56587 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16890-37879/.minikube/files for local assets ...
	I0717 14:00:30.368783   56587 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem -> 383252.pem in /etc/ssl/certs
	I0717 14:00:30.368961   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 14:00:30.378051   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /etc/ssl/certs/383252.pem (1708 bytes)
	I0717 14:00:30.399373   56587 start.go:303] post-start completed in 178.274722ms
	I0717 14:00:30.399459   56587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 14:00:30.399522   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.448607   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.538906   56587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 14:00:30.544045   56587 fix.go:56] fixHost completed within 5.061056018s
	I0717 14:00:30.544063   56587 start.go:83] releasing machines lock for "newest-cni-321000", held for 5.061106899s
	I0717 14:00:30.544153   56587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-321000
	I0717 14:00:30.594341   56587 ssh_runner.go:195] Run: cat /version.json
	I0717 14:00:30.594360   56587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 14:00:30.594416   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.594437   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:30.647973   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:30.647991   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	W0717 14:00:30.839919   56587 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.31.0 -> Actual minikube version: v1.30.1
	I0717 14:00:30.840003   56587 ssh_runner.go:195] Run: systemctl --version
	I0717 14:00:30.845429   56587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 14:00:30.850685   56587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 14:00:30.868179   56587 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
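
The find/sed pipeline above normalizes any loopback CNI config so it carries an explicit name and a cniVersion of 1.0.0. Assuming a stock loopback file in the base image, the patched result would look roughly like:

	{
		"cniVersion": "1.0.0",
		"name": "loopback",
		"type": "loopback"
	}
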
	I0717 14:00:30.868247   56587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 14:00:30.877255   56587 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 14:00:30.877277   56587 start.go:469] detecting cgroup driver to use...
	I0717 14:00:30.877291   56587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 14:00:30.877396   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 14:00:30.892630   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 14:00:30.902658   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 14:00:30.912240   56587 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 14:00:30.912302   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 14:00:30.922182   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 14:00:30.931854   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 14:00:30.941572   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 14:00:30.951298   56587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 14:00:30.960409   56587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 14:00:30.970308   56587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 14:00:30.978580   56587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 14:00:30.986984   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:31.058401   56587 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 14:00:31.133011   56587 start.go:469] detecting cgroup driver to use...
	I0717 14:00:31.133034   56587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 14:00:31.133099   56587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 14:00:31.145205   56587 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 14:00:31.145275   56587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 14:00:31.156948   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 14:00:31.173006   56587 ssh_runner.go:195] Run: which cri-dockerd
	I0717 14:00:31.177468   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 14:00:31.186468   56587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 14:00:31.227957   56587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 14:00:31.325610   56587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 14:00:31.394894   56587 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 14:00:31.394912   56587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 14:00:31.429023   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:31.498495   56587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 14:00:31.780216   56587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 14:00:31.849620   56587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 14:00:31.922375   56587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 14:00:31.992975   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:32.069258   56587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 14:00:32.082931   56587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 14:00:32.158373   56587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 14:00:32.242036   56587 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 14:00:32.242183   56587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 14:00:32.246776   56587 start.go:537] Will wait 60s for crictl version
	I0717 14:00:32.246842   56587 ssh_runner.go:195] Run: which crictl
	I0717 14:00:32.251190   56587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 14:00:32.294958   56587 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 14:00:32.295045   56587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 14:00:32.318919   56587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 14:00:32.387267   56587 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 14:00:32.387486   56587 cli_runner.go:164] Run: docker exec -t newest-cni-321000 dig +short host.docker.internal
	I0717 14:00:32.502875   56587 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 14:00:32.503003   56587 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 14:00:32.508060   56587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 14:00:32.519234   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:32.590913   56587 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 14:00:32.612593   56587 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 14:00:32.612768   56587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 14:00:32.634082   56587 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 14:00:32.634103   56587 docker.go:566] Images already preloaded, skipping extraction
	I0717 14:00:32.634193   56587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 14:00:32.654367   56587 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 14:00:32.654385   56587 cache_images.go:84] Images are preloaded, skipping loading
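
The image listings above drive the preload decision: extracting the tarball is skipped only when every required image is already present in the daemon. A hypothetical sketch of that containment check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing the log runs: one repo:tag per line.
		out, err := exec.Command("docker", "images",
			"--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		// A few of the images the preload above expects (list abbreviated).
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.27.3",
			"registry.k8s.io/etcd:3.5.7-0",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing", img, "- would extract the preload tarball")
				return
			}
		}
		fmt.Println("images already preloaded, skipping extraction")
	}
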
	I0717 14:00:32.654469   56587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 14:00:32.705515   56587 cni.go:84] Creating CNI manager for ""
	I0717 14:00:32.705532   56587 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 14:00:32.705550   56587 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 14:00:32.705566   56587 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-321000 NodeName:newest-cni-321000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 14:00:32.705678   56587 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-321000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 14:00:32.705745   56587 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-321000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 14:00:32.705809   56587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 14:00:32.714629   56587 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 14:00:32.714689   56587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 14:00:32.723069   56587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 14:00:32.738955   56587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 14:00:32.754750   56587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0717 14:00:32.771104   56587 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 14:00:32.775386   56587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 14:00:32.786351   56587 certs.go:56] Setting up /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000 for IP: 192.168.76.2
	I0717 14:00:32.786368   56587 certs.go:190] acquiring lock for shared ca certs: {Name:mkcb761e9710dc67a00cbdee9d78e096db7e9bb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:32.786530   56587 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key
	I0717 14:00:32.786582   56587 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key
	I0717 14:00:32.786678   56587 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/client.key
	I0717 14:00:32.786739   56587 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.key.31bdca25
	I0717 14:00:32.786790   56587 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.key
	I0717 14:00:32.787018   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem (1338 bytes)
	W0717 14:00:32.787054   56587 certs.go:433] ignoring /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325_empty.pem, impossibly tiny 0 bytes
	I0717 14:00:32.787065   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 14:00:32.787100   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/ca.pem (1078 bytes)
	I0717 14:00:32.787135   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/cert.pem (1123 bytes)
	I0717 14:00:32.787166   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/certs/key.pem (1679 bytes)
	I0717 14:00:32.787244   56587 certs.go:437] found cert: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem (1708 bytes)
	I0717 14:00:32.787836   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 14:00:32.808913   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 14:00:32.830658   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 14:00:32.852807   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/newest-cni-321000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 14:00:32.875655   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 14:00:32.897063   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 14:00:32.919267   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 14:00:32.940704   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 14:00:32.961922   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/ssl/certs/383252.pem --> /usr/share/ca-certificates/383252.pem (1708 bytes)
	I0717 14:00:32.983093   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 14:00:33.004393   56587 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16890-37879/.minikube/certs/38325.pem --> /usr/share/ca-certificates/38325.pem (1338 bytes)
	I0717 14:00:33.025350   56587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 14:00:33.041536   56587 ssh_runner.go:195] Run: openssl version
	I0717 14:00:33.047739   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 14:00:33.057311   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.061421   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 19:44 /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.061465   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 14:00:33.068132   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 14:00:33.076974   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/38325.pem && ln -fs /usr/share/ca-certificates/38325.pem /etc/ssl/certs/38325.pem"
	I0717 14:00:33.086550   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.090706   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 19:49 /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.090754   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/38325.pem
	I0717 14:00:33.097689   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/38325.pem /etc/ssl/certs/51391683.0"
	I0717 14:00:33.106638   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/383252.pem && ln -fs /usr/share/ca-certificates/383252.pem /etc/ssl/certs/383252.pem"
	I0717 14:00:33.116224   56587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.120399   56587 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 19:49 /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.120445   56587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/383252.pem
	I0717 14:00:33.127154   56587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/383252.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 14:00:33.135893   56587 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 14:00:33.140026   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 14:00:33.146773   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 14:00:33.153444   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 14:00:33.160115   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 14:00:33.167029   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 14:00:33.173853   56587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
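
Each of the openssl x509 -checkend 86400 probes above exits non-zero if the certificate lapses within the next 24 hours, which is what would trigger regeneration. An equivalent check, sketched in Go with crypto/x509:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// e.g. /var/lib/minikube/certs/etcd/server.crt
		raw, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Mirrors -checkend 86400: fail if the cert expires within 24h.
		if cert.NotAfter.Before(time.Now().Add(24 * time.Hour)) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}
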
	I0717 14:00:33.180696   56587 kubeadm.go:404] StartCluster: {Name:newest-cni-321000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-321000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 14:00:33.180816   56587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 14:00:33.201323   56587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 14:00:33.210397   56587 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 14:00:33.210411   56587 kubeadm.go:636] restartCluster start
	I0717 14:00:33.210463   56587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 14:00:33.218762   56587 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:33.218832   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:33.269056   56587 kubeconfig.go:135] verify returned: extract IP: "newest-cni-321000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 14:00:33.269207   56587 kubeconfig.go:146] "newest-cni-321000" context is missing from /Users/jenkins/minikube-integration/16890-37879/kubeconfig - will repair!
	I0717 14:00:33.269538   56587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:33.271167   56587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 14:00:33.280336   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:33.280405   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:33.290362   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:33.791582   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:33.791747   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:33.804141   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:34.292458   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:34.292640   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:34.305008   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:34.792266   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:34.792399   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:34.804430   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:35.292007   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:35.292198   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:35.304320   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:35.790699   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:35.790848   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:35.802931   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:36.290481   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:36.290609   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:36.303596   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:36.790989   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:36.791063   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:36.802053   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:37.292426   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:37.292618   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:37.305380   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:37.790529   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:37.790654   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:37.801522   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:38.292444   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:38.292622   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:38.305024   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:38.791109   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:38.791190   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:38.801586   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:39.291167   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:39.291337   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:39.303526   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:39.790436   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:39.792141   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:39.802456   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:40.291325   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:40.291455   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:40.303687   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:40.791412   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:40.791515   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:40.804323   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:41.290511   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:41.290621   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:41.302421   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:41.790401   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:41.790488   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:41.802718   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:42.290448   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:42.290670   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:42.302607   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:42.791640   56587 api_server.go:166] Checking apiserver status ...
	I0717 14:00:42.791757   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 14:00:42.804825   56587 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.280646   56587 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
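
The probes above fire roughly every 500ms until a deadline, at which point the restart path concludes the apiserver needs reconfiguring. A minimal sketch of that poll-until-deadline pattern (hypothetical, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServer(ctx context.Context) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			// Same probe the log retries: pgrep exits 0 once the
			// kube-apiserver process exists.
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		fmt.Println(waitForAPIServer(ctx))
	}
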
	I0717 14:00:43.280701   56587 kubeadm.go:1128] stopping kube-system containers ...
	I0717 14:00:43.280833   56587 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 14:00:43.304726   56587 docker.go:462] Stopping containers: [b82ab9f7110d 53800f131ed8 ce7673d97635 5b97ee3acc28 74fc2e774f7d 20ea1fca7d0a 744cab2865d4 65b7f502beed a1e734a6386e 6a4b0c265759 8b4b58c7830f 25a1b4cfc932 192ba9463b66 a088e41f9e49 55a88ae014e2 9c9dc4f570de 6b842d2db189 56ac6856fc2c]
	I0717 14:00:43.304809   56587 ssh_runner.go:195] Run: docker stop b82ab9f7110d 53800f131ed8 ce7673d97635 5b97ee3acc28 74fc2e774f7d 20ea1fca7d0a 744cab2865d4 65b7f502beed a1e734a6386e 6a4b0c265759 8b4b58c7830f 25a1b4cfc932 192ba9463b66 a088e41f9e49 55a88ae014e2 9c9dc4f570de 6b842d2db189 56ac6856fc2c
	I0717 14:00:43.325981   56587 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 14:00:43.338393   56587 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 14:00:43.348038   56587 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 20:59 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 20:59 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 20:59 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 20:59 /etc/kubernetes/scheduler.conf
	
	I0717 14:00:43.348110   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 14:00:43.357564   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 14:00:43.367029   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 14:00:43.376594   56587 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.376652   56587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 14:00:43.386819   56587 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 14:00:43.396100   56587 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 14:00:43.396150   56587 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 14:00:43.405537   56587 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 14:00:43.414996   56587 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
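The sequence above is the reconfigure check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that does not mention it is removed so the following kubeadm phases can regenerate it. A minimal sketch of the same loop, using only the endpoint and file paths shown in the log:

    endpoint='https://control-plane.minikube.internal:8443'
    for f in /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
             /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf; do
      # a file without the endpoint is stale and will be regenerated by kubeadm
      sudo grep -q "$endpoint" "$f" || sudo rm -f "$f"
    done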
	I0717 14:00:43.415009   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:43.464740   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.074402   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.217499   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.271469   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:44.357483   56587 api_server.go:52] waiting for apiserver process to appear ...
	I0717 14:00:44.357595   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 14:00:44.925157   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 14:00:45.424890   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 14:00:45.444451   56587 api_server.go:72] duration metric: took 1.086978061s to wait for apiserver process to appear ...
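The process wait above just polls pgrep until the apiserver command line matches. A hand-rolled equivalent, with the pattern copied from the log (the 0.5s interval is arbitrary):

    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5   # retry until the apiserver process appears
    done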
	I0717 14:00:45.444471   56587 api_server.go:88] waiting for apiserver healthz status ...
	I0717 14:00:45.444486   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:45.446278   56587 api_server.go:269] stopped: https://127.0.0.1:60368/healthz: Get "https://127.0.0.1:60368/healthz": EOF
	I0717 14:00:45.946985   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:48.163605   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 14:00:48.163627   56587 api_server.go:103] status: https://127.0.0.1:60368/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 14:00:48.163638   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:48.179051   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 14:00:48.179074   56587 api_server.go:103] status: https://127.0.0.1:60368/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 14:00:48.446446   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:48.453094   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 14:00:48.453134   56587 api_server.go:103] status: https://127.0.0.1:60368/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 14:00:48.946328   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:48.952777   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 14:00:48.952806   56587 api_server.go:103] status: https://127.0.0.1:60368/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 14:00:49.446330   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:49.451955   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 14:00:49.451974   56587 api_server.go:103] status: https://127.0.0.1:60368/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 14:00:49.946347   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:49.953318   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 200:
	ok
	I0717 14:00:49.961152   56587 api_server.go:141] control plane version: v1.27.3
	I0717 14:00:49.961166   56587 api_server.go:131] duration metric: took 4.516736981s to wait for apiserver health ...
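The status progression above is a normal apiserver startup: anonymous requests get 403 until the rbac/bootstrap-roles poststarthook has installed the default roles that expose /healthz, then the endpoint answers 500 while the remaining poststarthooks finish, and finally 200. The wait loop can be reproduced with curl against the same forwarded port (60368, from the log); `-k` skips verification of the cluster's self-signed certificate:

    url='https://127.0.0.1:60368/healthz'
    until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$url")" = 200 ]; do
      sleep 0.5   # 000 (connection refused), 403 and 500 all fall through to a retry
    done
    curl -sk "$url"   # prints "ok" once every check passes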
	I0717 14:00:49.961172   56587 cni.go:84] Creating CNI manager for ""
	I0717 14:00:49.961181   56587 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 14:00:49.982623   56587 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 14:00:50.003581   56587 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 14:00:50.010346   56587 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 14:00:50.010356   56587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 14:00:50.026937   56587 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
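CNI setup is two steps: stat the portmap plugin to confirm the CNI binaries are installed, then apply the generated manifest with the cluster's own kubectl and kubeconfig. Done by hand inside the node, that is roughly (paths taken from the log):

    test -e /opt/cni/bin/portmap && echo 'CNI plugins present'
    sudo /var/lib/minikube/binaries/v1.27.3/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig apply -f /var/tmp/minikube/cni.yaml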
	I0717 14:00:50.960745   56587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 14:00:50.969474   56587 system_pods.go:59] 9 kube-system pods found
	I0717 14:00:50.969494   56587 system_pods.go:61] "coredns-5d78c9869d-qkx5g" [0e0c280d-c43b-40bb-9d50-fe93a470db0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 14:00:50.969502   56587 system_pods.go:61] "etcd-newest-cni-321000" [46676356-9262-4f1c-af96-643a34ab5f06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 14:00:50.969511   56587 system_pods.go:61] "kindnet-952gt" [c7f9185a-9c40-47a2-b39f-f005ee1a5774] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 14:00:50.969518   56587 system_pods.go:61] "kube-apiserver-newest-cni-321000" [bd78b2b2-a05e-4577-972b-27d328f22cc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 14:00:50.969524   56587 system_pods.go:61] "kube-controller-manager-newest-cni-321000" [3c4cea69-f1be-47a0-9f64-a40b1c215e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 14:00:50.969533   56587 system_pods.go:61] "kube-proxy-mwbfh" [e832e015-49c6-48c1-b00b-f2b1672c854d] Running
	I0717 14:00:50.969539   56587 system_pods.go:61] "kube-scheduler-newest-cni-321000" [a8820388-60d3-48df-9647-04a98cad3b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 14:00:50.969545   56587 system_pods.go:61] "metrics-server-74d5c6b9c-4r6qg" [b448286b-71d1-4130-8583-6fa204f2fc80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 14:00:50.969549   56587 system_pods.go:61] "storage-provisioner" [a15d449e-1f38-460c-a032-5280892c08fa] Running
	I0717 14:00:50.969561   56587 system_pods.go:74] duration metric: took 8.803866ms to wait for pod list to return data ...
	I0717 14:00:50.969570   56587 node_conditions.go:102] verifying NodePressure condition ...
	I0717 14:00:51.016916   56587 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0717 14:00:51.016934   56587 node_conditions.go:123] node cpu capacity is 6
	I0717 14:00:51.016946   56587 node_conditions.go:105] duration metric: took 47.37192ms to run NodePressure ...
	I0717 14:00:51.016963   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 14:00:51.327415   56587 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 14:00:51.337047   56587 ops.go:34] apiserver oom_adj: -16
	I0717 14:00:51.337060   56587 kubeadm.go:640] restartCluster took 18.126837401s
	I0717 14:00:51.337067   56587 kubeadm.go:406] StartCluster complete in 18.156569944s
	I0717 14:00:51.337084   56587 settings.go:142] acquiring lock: {Name:mk20aac2aa27f8048925e201531865bdb5a37907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:51.337189   56587 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 14:00:51.337807   56587 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/kubeconfig: {Name:mk0f5d923a936f4479f634933efc75403106a170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 14:00:51.338061   56587 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 14:00:51.338124   56587 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 14:00:51.338188   56587 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-321000"
	I0717 14:00:51.338204   56587 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-321000"
	I0717 14:00:51.338206   56587 addons.go:69] Setting metrics-server=true in profile "newest-cni-321000"
	W0717 14:00:51.338213   56587 addons.go:240] addon storage-provisioner should already be in state true
	I0717 14:00:51.338218   56587 addons.go:231] Setting addon metrics-server=true in "newest-cni-321000"
	W0717 14:00:51.338224   56587 addons.go:240] addon metrics-server should already be in state true
	I0717 14:00:51.338250   56587 host.go:66] Checking if "newest-cni-321000" exists ...
	I0717 14:00:51.338251   56587 addons.go:69] Setting dashboard=true in profile "newest-cni-321000"
	I0717 14:00:51.338247   56587 addons.go:69] Setting default-storageclass=true in profile "newest-cni-321000"
	I0717 14:00:51.338266   56587 host.go:66] Checking if "newest-cni-321000" exists ...
	I0717 14:00:51.338271   56587 addons.go:231] Setting addon dashboard=true in "newest-cni-321000"
	W0717 14:00:51.338281   56587 addons.go:240] addon dashboard should already be in state true
	I0717 14:00:51.338288   56587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-321000"
	I0717 14:00:51.338307   56587 config.go:182] Loaded profile config "newest-cni-321000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 14:00:51.338330   56587 host.go:66] Checking if "newest-cni-321000" exists ...
	I0717 14:00:51.338634   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:51.338691   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:51.338759   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:51.338817   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:51.350455   56587 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-321000" context rescaled to 1 replicas
	I0717 14:00:51.350492   56587 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 14:00:51.374687   56587 out.go:177] * Verifying Kubernetes components...
	I0717 14:00:51.416721   56587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 14:00:51.452349   56587 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 14:00:51.439784   56587 addons.go:231] Setting addon default-storageclass=true in "newest-cni-321000"
	I0717 14:00:51.473520   56587 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	W0717 14:00:51.473545   56587 addons.go:240] addon default-storageclass should already be in state true
	I0717 14:00:51.452372   56587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 14:00:51.473586   56587 host.go:66] Checking if "newest-cni-321000" exists ...
	I0717 14:00:51.494640   56587 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 14:00:51.515416   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 14:00:51.536449   56587 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 14:00:51.515470   56587 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 14:00:51.515642   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:51.516285   56587 cli_runner.go:164] Run: docker container inspect newest-cni-321000 --format={{.State.Status}}
	I0717 14:00:51.534385   56587 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 14:00:51.534418   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:51.557467   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 14:00:51.557474   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 14:00:51.557483   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 14:00:51.558066   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:51.558151   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:51.639286   56587 api_server.go:52] waiting for apiserver process to appear ...
	I0717 14:00:51.639466   56587 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 14:00:51.639519   56587 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 14:00:51.639495   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:51.639530   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 14:00:51.639625   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-321000
	I0717 14:00:51.643252   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:51.646100   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:51.658766   56587 api_server.go:72] duration metric: took 308.246967ms to wait for apiserver process to appear ...
	I0717 14:00:51.658804   56587 api_server.go:88] waiting for apiserver healthz status ...
	I0717 14:00:51.658838   56587 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:60368/healthz ...
	I0717 14:00:51.668090   56587 api_server.go:279] https://127.0.0.1:60368/healthz returned 200:
	ok
	I0717 14:00:51.670251   56587 api_server.go:141] control plane version: v1.27.3
	I0717 14:00:51.670269   56587 api_server.go:131] duration metric: took 11.457543ms to wait for apiserver health ...
	I0717 14:00:51.670277   56587 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 14:00:51.679748   56587 system_pods.go:59] 9 kube-system pods found
	I0717 14:00:51.679773   56587 system_pods.go:61] "coredns-5d78c9869d-qkx5g" [0e0c280d-c43b-40bb-9d50-fe93a470db0c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 14:00:51.679788   56587 system_pods.go:61] "etcd-newest-cni-321000" [46676356-9262-4f1c-af96-643a34ab5f06] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 14:00:51.679817   56587 system_pods.go:61] "kindnet-952gt" [c7f9185a-9c40-47a2-b39f-f005ee1a5774] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 14:00:51.679834   56587 system_pods.go:61] "kube-apiserver-newest-cni-321000" [bd78b2b2-a05e-4577-972b-27d328f22cc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 14:00:51.679840   56587 system_pods.go:61] "kube-controller-manager-newest-cni-321000" [3c4cea69-f1be-47a0-9f64-a40b1c215e85] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 14:00:51.679848   56587 system_pods.go:61] "kube-proxy-mwbfh" [e832e015-49c6-48c1-b00b-f2b1672c854d] Running
	I0717 14:00:51.679854   56587 system_pods.go:61] "kube-scheduler-newest-cni-321000" [a8820388-60d3-48df-9647-04a98cad3b3d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 14:00:51.679859   56587 system_pods.go:61] "metrics-server-74d5c6b9c-4r6qg" [b448286b-71d1-4130-8583-6fa204f2fc80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 14:00:51.679865   56587 system_pods.go:61] "storage-provisioner" [a15d449e-1f38-460c-a032-5280892c08fa] Running
	I0717 14:00:51.679870   56587 system_pods.go:74] duration metric: took 9.587849ms to wait for pod list to return data ...
	I0717 14:00:51.679878   56587 default_sa.go:34] waiting for default service account to be created ...
	I0717 14:00:51.683578   56587 default_sa.go:45] found service account: "default"
	I0717 14:00:51.683595   56587 default_sa.go:55] duration metric: took 3.712691ms for default service account to be created ...
	I0717 14:00:51.683605   56587 kubeadm.go:581] duration metric: took 333.097415ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0717 14:00:51.683619   56587 node_conditions.go:102] verifying NodePressure condition ...
	I0717 14:00:51.688031   56587 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0717 14:00:51.688047   56587 node_conditions.go:123] node cpu capacity is 6
	I0717 14:00:51.688058   56587 node_conditions.go:105] duration metric: took 4.414721ms to run NodePressure ...
	I0717 14:00:51.688068   56587 start.go:228] waiting for startup goroutines ...
	I0717 14:00:51.710264   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60369 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/newest-cni-321000/id_rsa Username:docker}
	I0717 14:00:51.765695   56587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 14:00:51.765716   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 14:00:51.777173   56587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 14:00:51.778603   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 14:00:51.778614   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 14:00:51.818466   56587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 14:00:51.818484   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 14:00:51.836341   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 14:00:51.836357   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 14:00:51.841731   56587 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 14:00:51.841752   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 14:00:51.842759   56587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 14:00:51.917769   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 14:00:51.917791   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 14:00:51.929702   56587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 14:00:51.945986   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 14:00:51.946003   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 14:00:52.029050   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 14:00:52.029072   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 14:00:52.050718   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 14:00:52.050735   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 14:00:52.131000   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 14:00:52.131014   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 14:00:52.230184   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 14:00:52.230200   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 14:00:52.254013   56587 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 14:00:52.254038   56587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 14:00:52.339075   56587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 14:00:52.844613   56587 addons.go:467] Verifying addon metrics-server=true in "newest-cni-321000"
	I0717 14:00:53.347373   56587 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-321000 addons enable metrics-server	
	
	
	I0717 14:00:53.389373   56587 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0717 14:00:53.463385   56587 addons.go:502] enable addons completed in 2.12528826s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
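Every addon above follows the same pattern: its manifests are scp'd into /etc/kubernetes/addons/ and applied in one batched kubectl call. The resulting state can be confirmed from the host with the profile's addon listing:

    out/minikube-darwin-amd64 -p newest-cni-321000 addons list
    # expected enabled: storage-provisioner, default-storageclass, metrics-server, dashboard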
	I0717 14:00:53.463493   56587 start.go:233] waiting for cluster config update ...
	I0717 14:00:53.463515   56587 start.go:242] writing updated cluster config ...
	I0717 14:00:53.464062   56587 ssh_runner.go:195] Run: rm -f paused
	I0717 14:00:53.503775   56587 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 14:00:53.525384   56587 out.go:177] * Done! kubectl is now configured to use "newest-cni-321000" cluster and "default" namespace by default
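The closing skew line compares the client and server minor versions; kubectl supports a skew of one minor version in either direction, so 1.27.2 against 1.27.3 is fine. The same inputs can be read back with:

    kubectl version --output=json   # client v1.27.2 vs server v1.27.3 here: minor skew 0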
	
	* 
	* ==> Docker <==
	* Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.094039630Z" level=info msg="Loading containers: start."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.181177867Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.217420608Z" level=info msg="Loading containers: done."
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225759654Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.225825774Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255125966Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:24 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:24.255164231Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:24 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.581664014Z" level=info msg="Processing signal 'terminated'"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582662043Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[696]: time="2023-07-17T20:43:31.582919471Z" level=info msg="Daemon shutdown complete"
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 20:43:31 old-k8s-version-378000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.637367128Z" level=info msg="Starting up"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.768639802Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 20:43:31 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:31.930935601Z" level=info msg="Loading containers: start."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.076080810Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.150208855Z" level=info msg="Loading containers: done."
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158924505Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.158986647Z" level=info msg="Daemon has completed initialization"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188593160Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 20:43:32 old-k8s-version-378000 dockerd[919]: time="2023-07-17T20:43:32.188664710Z" level=info msg="API listen on [::]:2376"
	Jul 17 20:43:32 old-k8s-version-378000 systemd[1]: Started Docker Application Container Engine.
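The Docker section shows one clean systemd-managed restart of the daemon (stop, "Daemon shutdown complete", start, "Daemon has completed initialization"), so the container runtime itself is healthy. For reference, the same cycle can be triggered inside the node with something like:

    out/minikube-darwin-amd64 -p old-k8s-version-378000 ssh -- sudo systemctl restart docker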
	
	* 
	* ==> container status <==
	* time="2023-07-17T21:07:03Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
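The fatal above is the CRI client probing the legacy dockershim socket. On this v1.16 node that socket is created by the kubelet itself; with the kubelet crash-looping (see the kubelet section below), the socket never appears and the tooling falls back to plain `docker ps`, which returns an empty list. A quick way to confirm which side is missing (illustrative, not part of the test run):

    ls -l /var/run/dockershim.sock 2>/dev/null || echo 'dockershim socket missing (kubelet down)'
    docker ps   # the daemon itself answers, so only the kubelet side is broken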
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  21:07:03 up  5:05,  0 users,  load average: 0.00, 0.30, 0.69
	Linux old-k8s-version-378000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 21:07:01 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 21:07:02 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1381.
	Jul 17 21:07:02 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 21:07:02 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: I0717 21:07:02.394914   32477 server.go:410] Version: v1.16.0
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: I0717 21:07:02.395248   32477 plugins.go:100] No cloud provider specified.
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: I0717 21:07:02.395288   32477 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: I0717 21:07:02.398626   32477 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: W0717 21:07:02.399246   32477 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: W0717 21:07:02.399311   32477 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 21:07:02 old-k8s-version-378000 kubelet[32477]: F0717 21:07:02.399335   32477 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 21:07:02 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 21:07:02 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 21:07:03 old-k8s-version-378000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1382.
	Jul 17 21:07:03 old-k8s-version-378000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 21:07:03 old-k8s-version-378000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: I0717 21:07:03.158244   32538 server.go:410] Version: v1.16.0
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: I0717 21:07:03.158540   32538 plugins.go:100] No cloud provider specified.
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: I0717 21:07:03.158552   32538 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: I0717 21:07:03.162276   32538 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: W0717 21:07:03.162846   32538 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: W0717 21:07:03.162910   32538 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 21:07:03 old-k8s-version-378000 kubelet[32538]: F0717 21:07:03.162934   32538 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 21:07:03 old-k8s-version-378000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 21:07:03 old-k8s-version-378000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
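"mountpoint for cpu not found" means this v1.16 kubelet is looking for a cgroup v1 cpu controller mount, which is consistent with the linuxkit 5.15 host exposing a unified cgroup v2 hierarchy that predates the kubelet's cgroup v2 support. Two quick checks inside the node (illustrative, not part of the test run):

    stat -fc %T /sys/fs/cgroup   # 'cgroup2fs' indicates a unified v2 hierarchy
    grep -w cpu /proc/cgroups    # the hierarchy column reads 0 when cpu is not mounted as v1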
	
	

-- /stdout --
** stderr ** 
	E0717 14:07:03.493960   57110 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 2 (358.773995ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-378000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (376.73s)


Test pass (283/317)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 16.49
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.27.3/json-events 12.87
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.28
16 TestDownloadOnly/DeleteAll 0.61
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.35
18 TestDownloadOnlyKic 1.91
19 TestBinaryMirror 1.55
20 TestOffline 54.64
22 TestAddons/Setup 203.4
26 TestAddons/parallel/InspektorGadget 10.85
27 TestAddons/parallel/MetricsServer 6.08
28 TestAddons/parallel/HelmTiller 10.92
30 TestAddons/parallel/CSI 52.92
31 TestAddons/parallel/Headlamp 15.65
32 TestAddons/parallel/CloudSpanner 5.67
35 TestAddons/serial/GCPAuth/Namespaces 0.1
36 TestAddons/StoppedEnableDisable 11.73
37 TestCertOptions 25.27
38 TestCertExpiration 231.7
39 TestDockerFlags 26.62
40 TestForceSystemdFlag 26.2
41 TestForceSystemdEnv 25.94
44 TestHyperKitDriverInstallOrUpdate 6.88
48 TestErrorSpam/start 1.98
49 TestErrorSpam/status 1.13
50 TestErrorSpam/pause 1.68
51 TestErrorSpam/unpause 1.73
52 TestErrorSpam/stop 11.38
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 49.45
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 36.74
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.06
63 TestFunctional/serial/CacheCmd/cache/add_remote 7.06
64 TestFunctional/serial/CacheCmd/cache/add_local 1.56
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
66 TestFunctional/serial/CacheCmd/cache/list 0.07
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.4
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.47
69 TestFunctional/serial/CacheCmd/cache/delete 0.13
70 TestFunctional/serial/MinikubeKubectlCmd 0.55
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
72 TestFunctional/serial/ExtraConfig 39.68
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 3.57
75 TestFunctional/serial/LogsFileCmd 3.27
76 TestFunctional/serial/InvalidService 4.24
78 TestFunctional/parallel/ConfigCmd 0.46
79 TestFunctional/parallel/DashboardCmd 16.13
80 TestFunctional/parallel/DryRun 1.34
81 TestFunctional/parallel/InternationalLanguage 0.63
82 TestFunctional/parallel/StatusCmd 1.14
87 TestFunctional/parallel/AddonsCmd 0.26
88 TestFunctional/parallel/PersistentVolumeClaim 32.55
90 TestFunctional/parallel/SSHCmd 0.8
91 TestFunctional/parallel/CpCmd 1.61
92 TestFunctional/parallel/MySQL 39.71
93 TestFunctional/parallel/FileSync 0.4
94 TestFunctional/parallel/CertSync 2.33
98 TestFunctional/parallel/NodeLabels 0.1
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
102 TestFunctional/parallel/License 0.78
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.28
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
106 TestFunctional/parallel/Version/short 0.09
107 TestFunctional/parallel/Version/components 0.77
108 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
109 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
110 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
111 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
112 TestFunctional/parallel/ImageCommands/ImageBuild 2.95
113 TestFunctional/parallel/ImageCommands/Setup 3.4
114 TestFunctional/parallel/DockerEnv/bash 2.65
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.34
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.68
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.6
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.64
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.8
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.88
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.28
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
133 TestFunctional/parallel/MountCmd/any-port 7.73
134 TestFunctional/parallel/MountCmd/specific-port 2.46
135 TestFunctional/parallel/MountCmd/VerifyCleanup 2.96
136 TestFunctional/parallel/ServiceCmd/DeployApp 8.12
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
138 TestFunctional/parallel/ProfileCmd/profile_list 0.44
139 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
140 TestFunctional/parallel/ServiceCmd/List 1.85
141 TestFunctional/parallel/ServiceCmd/JSONOutput 1.81
142 TestFunctional/parallel/ServiceCmd/HTTPS 15
143 TestFunctional/parallel/ServiceCmd/Format 15
144 TestFunctional/parallel/ServiceCmd/URL 15
145 TestFunctional/delete_addon-resizer_images 0.14
146 TestFunctional/delete_my-image_image 0.06
147 TestFunctional/delete_minikube_cached_images 0.05
151 TestImageBuild/serial/Setup 21.24
152 TestImageBuild/serial/NormalBuild 2.29
153 TestImageBuild/serial/BuildWithBuildArg 0.83
154 TestImageBuild/serial/BuildWithDockerIgnore 0.66
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.64
165 TestJSONOutput/start/Command 49.32
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.56
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.58
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 5.74
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.69
190 TestKicCustomNetwork/create_custom_network 24.1
191 TestKicCustomNetwork/use_default_bridge_network 23.72
192 TestKicExistingNetwork 23.98
193 TestKicCustomSubnet 23.44
194 TestKicStaticIP 24.22
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 50
199 TestMountStart/serial/StartWithMountFirst 7.75
200 TestMountStart/serial/VerifyMountFirst 0.37
201 TestMountStart/serial/StartWithMountSecond 7.93
202 TestMountStart/serial/VerifyMountSecond 0.36
203 TestMountStart/serial/DeleteFirst 2.02
204 TestMountStart/serial/VerifyMountPostDelete 0.36
205 TestMountStart/serial/Stop 1.53
206 TestMountStart/serial/RestartStopped 9.06
207 TestMountStart/serial/VerifyMountPostStop 0.37
210 TestMultiNode/serial/FreshStart2Nodes 63.5
211 TestMultiNode/serial/DeployApp2Nodes 46.76
212 TestMultiNode/serial/PingHostFrom2Pods 0.85
213 TestMultiNode/serial/AddNode 15.39
214 TestMultiNode/serial/ProfileList 0.39
215 TestMultiNode/serial/CopyFile 13.08
216 TestMultiNode/serial/StopNode 2.82
217 TestMultiNode/serial/StartAfterStop 12.97
218 TestMultiNode/serial/RestartKeepsNodes 97.15
219 TestMultiNode/serial/DeleteNode 5.76
220 TestMultiNode/serial/StopMultiNode 21.76
221 TestMultiNode/serial/RestartMultiNode 56.87
222 TestMultiNode/serial/ValidateNameConflict 25.55
226 TestPreload 157.82
228 TestScheduledStopUnix 95.48
229 TestSkaffold 116.69
231 TestInsufficientStorage 10.69
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 11.71
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 14.76
249 TestStoppedBinaryUpgrade/Setup 2.05
251 TestStoppedBinaryUpgrade/MinikubeLogs 3.49
253 TestPause/serial/Start 49.16
254 TestPause/serial/SecondStartNoReconfiguration 35.79
255 TestPause/serial/Pause 0.67
256 TestPause/serial/VerifyStatus 0.38
257 TestPause/serial/Unpause 0.63
258 TestPause/serial/PauseAgain 0.7
259 TestPause/serial/DeletePaused 2.44
260 TestPause/serial/VerifyDeletedResources 0.51
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
270 TestNoKubernetes/serial/StartWithK8s 22.06
271 TestNoKubernetes/serial/StartWithStopK8s 8.67
272 TestNoKubernetes/serial/Start 7.76
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
274 TestNoKubernetes/serial/ProfileList 1.22
275 TestNoKubernetes/serial/Stop 1.53
276 TestNoKubernetes/serial/StartNoArgs 9.08
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
278 TestNetworkPlugins/group/auto/Start 50.27
279 TestNetworkPlugins/group/auto/KubeletFlags 0.36
280 TestNetworkPlugins/group/auto/NetCatPod 11.27
281 TestNetworkPlugins/group/auto/DNS 0.13
282 TestNetworkPlugins/group/auto/Localhost 0.11
283 TestNetworkPlugins/group/auto/HairPin 0.1
284 TestNetworkPlugins/group/kindnet/Start 50.26
285 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
286 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
287 TestNetworkPlugins/group/kindnet/NetCatPod 11.25
288 TestNetworkPlugins/group/kindnet/DNS 0.13
289 TestNetworkPlugins/group/kindnet/Localhost 0.12
290 TestNetworkPlugins/group/kindnet/HairPin 0.11
291 TestNetworkPlugins/group/calico/Start 64.76
292 TestNetworkPlugins/group/calico/ControllerPod 5.02
293 TestNetworkPlugins/group/calico/KubeletFlags 0.36
294 TestNetworkPlugins/group/calico/NetCatPod 13.27
295 TestNetworkPlugins/group/calico/DNS 0.14
296 TestNetworkPlugins/group/calico/Localhost 0.13
297 TestNetworkPlugins/group/calico/HairPin 0.12
298 TestNetworkPlugins/group/custom-flannel/Start 50.53
299 TestNetworkPlugins/group/false/Start 36.91
300 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
301 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.31
302 TestNetworkPlugins/group/false/KubeletFlags 0.37
303 TestNetworkPlugins/group/false/NetCatPod 13.27
304 TestNetworkPlugins/group/custom-flannel/DNS 0.13
305 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
306 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
307 TestNetworkPlugins/group/false/DNS 0.14
308 TestNetworkPlugins/group/false/Localhost 0.12
309 TestNetworkPlugins/group/false/HairPin 0.11
310 TestNetworkPlugins/group/enable-default-cni/Start 37.8
311 TestNetworkPlugins/group/flannel/Start 50.58
312 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
313 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.28
314 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
315 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
316 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
317 TestNetworkPlugins/group/flannel/ControllerPod 5.02
318 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
319 TestNetworkPlugins/group/flannel/NetCatPod 12.27
320 TestNetworkPlugins/group/flannel/DNS 0.12
321 TestNetworkPlugins/group/flannel/Localhost 0.14
322 TestNetworkPlugins/group/flannel/HairPin 0.12
323 TestNetworkPlugins/group/bridge/Start 36.82
324 TestNetworkPlugins/group/kubenet/Start 45.97
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.39
326 TestNetworkPlugins/group/bridge/NetCatPod 13.28
327 TestNetworkPlugins/group/bridge/DNS 0.13
328 TestNetworkPlugins/group/bridge/Localhost 0.12
329 TestNetworkPlugins/group/bridge/HairPin 0.11
330 TestNetworkPlugins/group/kubenet/KubeletFlags 0.39
331 TestNetworkPlugins/group/kubenet/NetCatPod 13.31
334 TestNetworkPlugins/group/kubenet/DNS 0.13
335 TestNetworkPlugins/group/kubenet/Localhost 0.11
336 TestNetworkPlugins/group/kubenet/HairPin 0.11
338 TestStartStop/group/no-preload/serial/FirstStart 78.63
339 TestStartStop/group/no-preload/serial/DeployApp 9.32
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.18
341 TestStartStop/group/no-preload/serial/Stop 10.83
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.42
343 TestStartStop/group/no-preload/serial/SecondStart 332.94
346 TestStartStop/group/old-k8s-version/serial/Stop 1.52
347 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.41
349 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 14.02
350 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
351 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.41
352 TestStartStop/group/no-preload/serial/Pause 2.94
354 TestStartStop/group/embed-certs/serial/FirstStart 50.53
355 TestStartStop/group/embed-certs/serial/DeployApp 10.33
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.18
357 TestStartStop/group/embed-certs/serial/Stop 10.9
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.4
359 TestStartStop/group/embed-certs/serial/SecondStart 337.07
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 18.02
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.41
364 TestStartStop/group/embed-certs/serial/Pause 2.94
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 50.1
367 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.86
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 328.38
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.01
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.41
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.04
377 TestStartStop/group/newest-cni/serial/FirstStart 34.6
378 TestStartStop/group/newest-cni/serial/DeployApp 0
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
380 TestStartStop/group/newest-cni/serial/Stop 11.38
381 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
382 TestStartStop/group/newest-cni/serial/SecondStart 29.27
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.42
387 TestStartStop/group/newest-cni/serial/Pause 3.64

TestDownloadOnly/v1.16.0/json-events (16.49s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-589000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-589000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (16.484974032s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (16.49s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-589000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-589000: exit status 85 (282.760125ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-589000 | jenkins | v1.30.1 | 17 Jul 23 12:43 PDT |          |
	|         | -p download-only-589000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 12:43:35
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 12:43:35.426688   38327 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:43:35.426868   38327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:43:35.426875   38327 out.go:309] Setting ErrFile to fd 2...
	I0717 12:43:35.426879   38327 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:43:35.427079   38327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	W0717 12:43:35.427179   38327 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16890-37879/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16890-37879/.minikube/config/config.json: no such file or directory
	I0717 12:43:35.428988   38327 out.go:303] Setting JSON to true
	I0717 12:43:35.448721   38327 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13386,"bootTime":1689609629,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 12:43:35.448808   38327 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 12:43:35.470321   38327 out.go:97] [download-only-589000] minikube v1.30.1 on Darwin 13.4.1
	W0717 12:43:35.470552   38327 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 12:43:35.470559   38327 notify.go:220] Checking for updates...
	I0717 12:43:35.492121   38327 out.go:169] MINIKUBE_LOCATION=16890
	I0717 12:43:35.513265   38327 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 12:43:35.535182   38327 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 12:43:35.557171   38327 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 12:43:35.578134   38327 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	W0717 12:43:35.619816   38327 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 12:43:35.620095   38327 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 12:43:35.674825   38327 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 12:43:35.674921   38327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:43:35.772363   38327 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:43:35.761320735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<n
il>}}
	I0717 12:43:35.793742   38327 out.go:97] Using the docker driver based on user configuration
	I0717 12:43:35.793792   38327 start.go:298] selected driver: docker
	I0717 12:43:35.793807   38327 start.go:880] validating driver "docker" against <nil>
	I0717 12:43:35.794079   38327 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:43:35.892771   38327 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:43:35.882182506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<n
il>}}
	I0717 12:43:35.892941   38327 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 12:43:35.895435   38327 start_flags.go:382] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0717 12:43:35.895575   38327 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 12:43:35.917057   38327 out.go:169] Using Docker Desktop driver with root privileges
	I0717 12:43:35.937893   38327 cni.go:84] Creating CNI manager for ""
	I0717 12:43:35.937982   38327 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 12:43:35.938006   38327 start_flags.go:319] config:
	{Name:download-only-589000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-589000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:43:35.959882   38327 out.go:97] Starting control plane node download-only-589000 in cluster download-only-589000
	I0717 12:43:35.959954   38327 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 12:43:35.980862   38327 out.go:97] Pulling base image ...
	I0717 12:43:35.981017   38327 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 12:43:35.981089   38327 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 12:43:36.031574   38327 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 12:43:36.031791   38327 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 12:43:36.031909   38327 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 12:43:36.077189   38327 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 12:43:36.077222   38327 cache.go:57] Caching tarball of preloaded images
	I0717 12:43:36.077549   38327 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 12:43:36.100620   38327 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 12:43:36.100693   38327 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:43:36.303772   38327 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 12:43:48.408140   38327 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:43:48.408286   38327 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:43:48.956810   38327 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 12:43:48.957018   38327 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/download-only-589000/config.json ...
	I0717 12:43:48.957046   38327 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/download-only-589000/config.json: {Name:mk315bca21434ecfab0eb4a5f54feccc7a2fbe0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 12:43:48.957316   38327 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 12:43:48.957584   38327 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-589000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

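Note on the exit status 85 above: a --download-only start caches the kic base image, the preload tarball, and kubectl without ever creating a node, so "minikube logs" has no machine to read from and exits non-zero with 'The control plane node "" does not exist.' The LogsDuration subtest, as its name suggests, only times the logs command, so this still counts as a pass. A minimal by-hand reproduction, reusing the profile name and flags recorded in the log above:

	# Cache artifacts without creating a node, then ask for logs
	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-589000 \
	  --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
	out/minikube-darwin-amd64 logs -p download-only-589000
	echo $?    # expected: 85, since no control plane node exists for this profile
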
TestDownloadOnly/v1.27.3/json-events (12.87s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-589000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-589000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker : (12.873975578s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (12.87s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.28s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-589000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-589000: exit status 85 (276.717538ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-589000 | jenkins | v1.30.1 | 17 Jul 23 12:43 PDT |          |
	|         | -p download-only-589000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-589000 | jenkins | v1.30.1 | 17 Jul 23 12:43 PDT |          |
	|         | -p download-only-589000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 12:43:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 12:43:52.196586   38364 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:43:52.196791   38364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:43:52.196796   38364 out.go:309] Setting ErrFile to fd 2...
	I0717 12:43:52.196800   38364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:43:52.197017   38364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	W0717 12:43:52.197110   38364 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16890-37879/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16890-37879/.minikube/config/config.json: no such file or directory
	I0717 12:43:52.198668   38364 out.go:303] Setting JSON to true
	I0717 12:43:52.217963   38364 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13403,"bootTime":1689609629,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 12:43:52.218041   38364 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 12:43:52.241170   38364 out.go:97] [download-only-589000] minikube v1.30.1 on Darwin 13.4.1
	I0717 12:43:52.241387   38364 notify.go:220] Checking for updates...
	I0717 12:43:52.263544   38364 out.go:169] MINIKUBE_LOCATION=16890
	I0717 12:43:52.284442   38364 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 12:43:52.306574   38364 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 12:43:52.328726   38364 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 12:43:52.350330   38364 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	W0717 12:43:52.392542   38364 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 12:43:52.393250   38364 config.go:182] Loaded profile config "download-only-589000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0717 12:43:52.393338   38364 start.go:788] api.Load failed for download-only-589000: filestore "download-only-589000": Docker machine "download-only-589000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 12:43:52.393502   38364 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 12:43:52.393540   38364 start.go:788] api.Load failed for download-only-589000: filestore "download-only-589000": Docker machine "download-only-589000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 12:43:52.447394   38364 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 12:43:52.447512   38364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:43:52.542418   38364 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:43:52.531396506 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<n
il>}}
	I0717 12:43:52.564071   38364 out.go:97] Using the docker driver based on existing profile
	I0717 12:43:52.564141   38364 start.go:298] selected driver: docker
	I0717 12:43:52.564156   38364 start.go:880] validating driver "docker" against &{Name:download-only-589000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-589000 Namespace:default APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0}
	I0717 12:43:52.564464   38364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:43:52.663729   38364 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 19:43:52.653385578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexS
erverAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=built
in name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Pat
h:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<n
il>}}
	I0717 12:43:52.666353   38364 cni.go:84] Creating CNI manager for ""
	I0717 12:43:52.666382   38364 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 12:43:52.666395   38364 start_flags.go:319] config:
	{Name:download-only-589000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-589000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket:
NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:43:52.688192   38364 out.go:97] Starting control plane node download-only-589000 in cluster download-only-589000
	I0717 12:43:52.688295   38364 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 12:43:52.710082   38364 out.go:97] Pulling base image ...
	I0717 12:43:52.710218   38364 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 12:43:52.710293   38364 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 12:43:52.759826   38364 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 12:43:52.759942   38364 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 12:43:52.759963   38364 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 12:43:52.759968   38364 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 12:43:52.759975   38364 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 12:43:52.799687   38364 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 12:43:52.799713   38364 cache.go:57] Caching tarball of preloaded images
	I0717 12:43:52.800054   38364 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 12:43:52.822068   38364 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 12:43:52.822146   38364 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:43:53.024721   38364 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4?checksum=md5:90b30902fa911e3bcfdde5b24cedf0b2 -> /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 12:44:01.912073   38364 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0717 12:44:01.912297   38364 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16890-37879/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-589000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.28s)

TestDownloadOnly/DeleteAll (0.61s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.61s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-589000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.35s)

TestDownloadOnlyKic (1.91s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-209000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-209000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-209000
--- PASS: TestDownloadOnlyKic (1.91s)

TestBinaryMirror (1.55s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-729000 --alsologtostderr --binary-mirror http://127.0.0.1:54867 --driver=docker 
helpers_test.go:175: Cleaning up "binary-mirror-729000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-729000
--- PASS: TestBinaryMirror (1.55s)

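TestBinaryMirror points the kubectl/kubelet/kubeadm downloads at a local HTTP server via --binary-mirror instead of dl.k8s.io. A rough sketch of doing the same by hand; it assumes the mirror has to expose the same <version>/bin/<os>/<arch>/<name> layout as the dl.k8s.io URLs seen earlier in this report, that MINIKUBE_HOME is at its default location for the cache path, and the profile name here is illustrative only:

	# Serve already-cached binaries under a dl.k8s.io-style layout
	# (kubelet, kubeadm, and the matching .sha1 checksum files would need the same treatment)
	mkdir -p mirror/v1.27.3/bin/darwin/amd64
	cp "$HOME/.minikube/cache/darwin/amd64/v1.27.3/kubectl" mirror/v1.27.3/bin/darwin/amd64/
	(cd mirror && python3 -m http.server 54867) &
	out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:54867 --driver=docker
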
TestOffline (54.64s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-655000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-655000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (52.065682547s)
helpers_test.go:175: Cleaning up "offline-docker-655000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-655000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-655000: (2.573042006s)
--- PASS: TestOffline (54.64s)

TestAddons/Setup (203.4s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-702000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-702000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m23.40436218s)
--- PASS: TestAddons/Setup (203.40s)

TestAddons/parallel/InspektorGadget (10.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rklvq" [c0940116-d4f7-4e75-bcc3-f108f04b3705] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.012063063s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-702000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-702000: (5.836076603s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

TestAddons/parallel/MetricsServer (6.08s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 3.249256ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-frrlj" [83544775-0197-462a-b940-9903911a86ce] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008098761s
addons_test.go:391: (dbg) Run:  kubectl --context addons-702000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-702000 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p addons-702000 addons disable metrics-server --alsologtostderr -v=1: (1.012921972s)
--- PASS: TestAddons/parallel/MetricsServer (6.08s)

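The MetricsServer subtest reduces to three checks that can be rerun by hand against the same profile: the k8s-app=metrics-server pod is Running, "kubectl top" returns rows (which only works once the metrics API is actually being served), and the addon disables cleanly. Equivalent commands, adapted from the log above (label selector and profile name as recorded there):

	kubectl --context addons-702000 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-702000 top pods -n kube-system
	out/minikube-darwin-amd64 -p addons-702000 addons disable metrics-server --alsologtostderr -v=1
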
TestAddons/parallel/HelmTiller (10.92s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 30.810186ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-fpcg9" [68b8131e-a8ed-4f05-816c-e86bd8f5fe23] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009419271s
addons_test.go:449: (dbg) Run:  kubectl --context addons-702000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-702000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.104909716s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-702000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.92s)

TestAddons/parallel/CSI (52.92s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 4.931875ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fcf3f09f-d475-4cd7-a151-d35febaeed84] Pending
helpers_test.go:344: "task-pv-pod" [fcf3f09f-d475-4cd7-a151-d35febaeed84] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fcf3f09f-d475-4cd7-a151-d35febaeed84] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.010293463s
addons_test.go:560: (dbg) Run:  kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-702000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-702000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-702000 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-702000 delete pod task-pv-pod: (1.43017259s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-702000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-702000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [ea5b81cb-d96a-4865-b994-7545ac195618] Pending
helpers_test.go:344: "task-pv-pod-restore" [ea5b81cb-d96a-4865-b994-7545ac195618] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [ea5b81cb-d96a-4865-b994-7545ac195618] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.009300831s
addons_test.go:602: (dbg) Run:  kubectl --context addons-702000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-702000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-702000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-702000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-702000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.80372845s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-702000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (52.92s)
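For reference, the snapshot-and-restore flow exercised above can be replayed by hand against the same profile. A minimal sketch using the manifests named in the log (paths assume a minikube source checkout; the readiness waits the test performs are elided):

    # provision a claim and a pod that mounts it via the hostpath CSI driver
    kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # snapshot the volume, then drop the original pod and claim
    kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-702000 delete pod task-pv-pod
    kubectl --context addons-702000 delete pvc hpvc
    # restore a fresh claim and pod from the snapshot
    kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-702000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml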

                                                
                                    
TestAddons/parallel/Headlamp (15.65s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-702000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-702000 --alsologtostderr -v=1: (1.641474035s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-vv2sd" [7efa21e7-d611-47e8-abb2-95d01f066256] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-vv2sd" [7efa21e7-d611-47e8-abb2-95d01f066256] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.010193445s
--- PASS: TestAddons/parallel/Headlamp (15.65s)

TestAddons/parallel/CloudSpanner (5.67s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-njgwr" [a91c7b5a-0725-4b3b-aee5-a742f130dcd0] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01447704s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-702000
--- PASS: TestAddons/parallel/CloudSpanner (5.67s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-702000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-702000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (11.73s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-702000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-702000: (11.067002007s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-702000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-702000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-702000
--- PASS: TestAddons/StoppedEnableDisable (11.73s)

TestCertOptions (25.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-472000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-472000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (21.922034872s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-472000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-472000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-472000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-472000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-472000: (2.554496706s)
--- PASS: TestCertOptions (25.27s)
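The assertions above reduce to inspecting the generated API server certificate inside the node. A hand-run sketch (the grep filter is illustrative, not part of the test; expect 192.168.15.15, www.google.com and port 8555 to appear):

    out/minikube-darwin-amd64 -p cert-options-472000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"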

                                                
                                    
TestCertExpiration (231.7s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=3m --driver=docker : (22.516339995s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0717 13:24:42.255667   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:24:52.497195   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.748763574s)
helpers_test.go:175: Cleaning up "cert-expiration-533000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-533000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-533000: (2.433794467s)
--- PASS: TestCertExpiration (231.70s)
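Condensed, the renewal path above is two starts of the same profile: the first issues three-minute certificates, and after they lapse the second start (8760h is one year) must regenerate them rather than fail. A sketch, assuming a wait of roughly the certificate lifetime in between:

    out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=3m --driver=docker
    sleep 180   # let the 3m certificates expire
    out/minikube-darwin-amd64 start -p cert-expiration-533000 --memory=2048 --cert-expiration=8760h --driver=docker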

                                                
                                    
TestDockerFlags (26.62s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-330000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-330000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.331708309s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-330000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-330000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-330000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-330000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-330000: (2.518439754s)
--- PASS: TestDockerFlags (26.62s)
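The two systemctl probes above are how the flags are confirmed to reach dockerd; the expected substrings noted below are assumptions implied by the flags passed, not output captured in this run:

    # --docker-env values should surface in the unit's Environment
    out/minikube-darwin-amd64 -p docker-flags-330000 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR, BAZ=BAT
    # --docker-opt values should surface in ExecStart
    out/minikube-darwin-amd64 -p docker-flags-330000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug, --icc=true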

                                                
                                    
TestForceSystemdFlag (26.2s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-011000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-011000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (23.206558364s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-011000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-011000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-011000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-011000: (2.564518836s)
--- PASS: TestForceSystemdFlag (26.20s)
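The pass condition here is just the cgroup driver Docker reports from inside the node; a one-line check (the expected value is an assumption implied by --force-systemd):

    out/minikube-darwin-amd64 -p force-systemd-flag-011000 ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd, not the default cgroupfs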

                                                
                                    
TestForceSystemdEnv (25.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-320000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-320000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (23.04793231s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-320000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-320000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-320000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-320000: (2.466632155s)
--- PASS: TestForceSystemdEnv (25.94s)

TestHyperKitDriverInstallOrUpdate (6.88s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (6.88s)

TestErrorSpam/start (1.98s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 start --dry-run
--- PASS: TestErrorSpam/start (1.98s)

TestErrorSpam/status (1.13s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 status
--- PASS: TestErrorSpam/status (1.13s)

TestErrorSpam/pause (1.68s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 pause
--- PASS: TestErrorSpam/pause (1.68s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (11.38s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 stop: (10.788113388s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-590000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-590000 stop
--- PASS: TestErrorSpam/stop (11.38s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/16890-37879/.minikube/files/etc/test/nested/copy/38325/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.45s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (49.4508191s)
--- PASS: TestFunctional/serial/StartWithProxy (49.45s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.74s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --alsologtostderr -v=8: (36.734472815s)
functional_test.go:659: soft start took 36.73499147s for "functional-625000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.74s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-625000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (7.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.1: (2.410829051s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:3.3: (2.549509173s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add registry.k8s.io/pause:latest: (2.101687732s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (7.06s)

TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2137100324/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache add minikube-local-cache-test:functional-625000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache add minikube-local-cache-test:functional-625000: (1.102997751s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache delete minikube-local-cache-test:functional-625000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-625000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)
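The local-image round trip above can be reproduced with any throwaway tag; a sketch against the same profile (the build context is whatever directory holds a small Dockerfile):

    docker build -t minikube-local-cache-test:functional-625000 .
    out/minikube-darwin-amd64 -p functional-625000 cache add minikube-local-cache-test:functional-625000
    out/minikube-darwin-amd64 -p functional-625000 cache delete minikube-local-cache-test:functional-625000
    docker rmi minikube-local-cache-test:functional-625000   # clean up the host-side tag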

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.40s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (369.183579ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 cache reload: (1.336528012s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.47s)
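Condensed, the reload cycle above: delete the image inside the node, confirm it is gone, then let cache reload restore it from the host-side cache:

    out/minikube-darwin-amd64 -p functional-625000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
    out/minikube-darwin-amd64 -p functional-625000 cache reload
    out/minikube-darwin-amd64 -p functional-625000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds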

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 kubectl -- --context functional-625000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-625000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

TestFunctional/serial/ExtraConfig (39.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-625000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.680763949s)
functional_test.go:757: restart took 39.680914694s for "functional-625000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.68s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-625000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.57s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 logs: (3.567239348s)
--- PASS: TestFunctional/serial/LogsCmd (3.57s)

TestFunctional/serial/LogsFileCmd (3.27s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd140190458/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd140190458/001/logs.txt: (3.26715923s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.27s)

TestFunctional/serial/InvalidService (4.24s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-625000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-625000: exit status 115 (537.570864ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31750 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-625000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
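What this test pins down: minikube service on a service whose pods never come up must fail loudly instead of printing a dead URL. A manual repro sketch (exit code 115 and the SVC_UNREACHABLE reason are taken from the capture above):

    kubectl --context functional-625000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-625000; echo "exit: $?"   # expect 115
    kubectl --context functional-625000 delete -f testdata/invalidsvc.yaml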

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 config get cpus: exit status 14 (41.304401ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 config get cpus: exit status 14 (40.53567ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (16.13s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-625000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-625000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 40731: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.13s)

TestFunctional/parallel/DryRun (1.34s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (680.422139ms)

-- stdout --
	* [functional-625000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0717 12:53:48.817306   40667 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:53:48.817460   40667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:53:48.817465   40667 out.go:309] Setting ErrFile to fd 2...
	I0717 12:53:48.817469   40667 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:53:48.817657   40667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 12:53:48.818890   40667 out.go:303] Setting JSON to false
	I0717 12:53:48.837994   40667 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13999,"bootTime":1689609629,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 12:53:48.838073   40667 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 12:53:48.859979   40667 out.go:177] * [functional-625000] minikube v1.30.1 on Darwin 13.4.1
	I0717 12:53:48.928344   40667 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 12:53:48.902819   40667 notify.go:220] Checking for updates...
	I0717 12:53:48.970173   40667 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 12:53:49.012189   40667 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 12:53:49.033110   40667 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 12:53:49.054207   40667 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 12:53:49.112274   40667 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 12:53:49.133793   40667 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 12:53:49.134553   40667 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 12:53:49.190440   40667 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 12:53:49.190556   40667 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:53:49.284420   40667 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 19:53:49.274066426 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 12:53:49.326018   40667 out.go:177] * Using the docker driver based on existing profile
	I0717 12:53:49.347281   40667 start.go:298] selected driver: docker
	I0717 12:53:49.347343   40667 start.go:880] validating driver "docker" against &{Name:functional-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-625000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:53:49.347466   40667 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 12:53:49.372211   40667 out.go:177] 
	W0717 12:53:49.393212   40667 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 12:53:49.414351   40667 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.34s)
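The negative case above checks that --dry-run still validates resource requests before touching the cluster; a sketch (exit code 23, the RSRC_INSUFFICIENT_REQ_MEMORY reason, and the 1800MB floor all come from the capture above):

    out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --driver=docker; echo "exit: $?"   # expect 23: 250MiB < 1800MB minimum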

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.63s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (629.405147ms)

-- stdout --
	* [functional-625000] minikube v1.30.1 sur Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0717 12:53:47.045220   40621 out.go:296] Setting OutFile to fd 1 ...
	I0717 12:53:47.045479   40621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:53:47.045485   40621 out.go:309] Setting ErrFile to fd 2...
	I0717 12:53:47.045489   40621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 12:53:47.045710   40621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 12:53:47.047257   40621 out.go:303] Setting JSON to false
	I0717 12:53:47.066769   40621 start.go:128] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13998,"bootTime":1689609629,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0717 12:53:47.066863   40621 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 12:53:47.103731   40621 out.go:177] * [functional-625000] minikube v1.30.1 sur Darwin 13.4.1
	I0717 12:53:47.163383   40621 notify.go:220] Checking for updates...
	I0717 12:53:47.184450   40621 out.go:177]   - MINIKUBE_LOCATION=16890
	I0717 12:53:47.205580   40621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	I0717 12:53:47.226329   40621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 12:53:47.247582   40621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 12:53:47.269403   40621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	I0717 12:53:47.290503   40621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 12:53:47.311666   40621 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 12:53:47.312085   40621 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 12:53:47.366624   40621 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 12:53:47.366746   40621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 12:53:47.463312   40621 info.go:266] docker info: {ID:9dd12a49-41d2-44e8-aa64-4ab7fa99394e Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 19:53:47.452575281 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 12:53:47.505637   40621 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 12:53:47.526730   40621 start.go:298] selected driver: docker
	I0717 12:53:47.526761   40621 start.go:880] validating driver "docker" against &{Name:functional-625000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-625000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 12:53:47.526891   40621 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 12:53:47.551663   40621 out.go:177] 
	W0717 12:53:47.572553   40621 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 12:53:47.593587   40621 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.63s)
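
For readers without French: the localized stderr above says "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB", and the earlier "* Utilisation du pilote docker..." line says "Using the docker driver based on the existing profile". A minimal sketch of what this subtest exercises, with the locale variable and flags reconstructed rather than copied from the harness:

# force a French locale, then request memory below minikube's 1800MB floor
LC_ALL=fr out/minikube-darwin-amd64 start -p functional-625000 --dry-run --memory 250MB --alsologtostderr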

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
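
Note that "kublet" in the -f format string above is only a literal label in the template output, not a typo that affects the check; the template fields (.Host, .Kubelet, .APIServer, .Kubeconfig) are what the test exercises. The JSON variant can be inspected by hand, assuming jq is available on the host (it is not part of the test):

# Host/Kubelet/APIServer should report Running; Kubeconfig reports Configured
out/minikube-darwin-amd64 -p functional-625000 status -o json | jq -r '.Host, .Kubelet, .APIServer, .Kubeconfig'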

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (32.55s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [def5ff58-1903-4521-85ec-e7248a4fda48] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011163753s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-625000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-625000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [16089c2f-3e16-4ca5-8cb0-f0e5b70a1d22] Pending
helpers_test.go:344: "sp-pod" [16089c2f-3e16-4ca5-8cb0-f0e5b70a1d22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [16089c2f-3e16-4ca5-8cb0-f0e5b70a1d22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.010563375s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-625000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-625000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [feeec7fc-3881-4000-990f-e41fc3499834] Pending
helpers_test.go:344: "sp-pod" [feeec7fc-3881-4000-990f-e41fc3499834] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [feeec7fc-3881-4000-990f-e41fc3499834] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.010168783s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-625000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.55s)
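
The sequence above is a persistence round-trip: write a file through the claim, delete the pod, schedule a fresh pod against the same PVC, and confirm the file survived. Condensed from the logged commands (names and paths exactly as logged):

kubectl --context functional-625000 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-625000 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-625000 apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
kubectl --context functional-625000 exec sp-pod -- ls /tmp/mount                     # foo must still be there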

TestFunctional/parallel/SSHCmd (0.8s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.80s)

TestFunctional/parallel/CpCmd (1.61s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -n functional-625000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 cp functional-625000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd2877877044/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -n functional-625000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.61s)

TestFunctional/parallel/MySQL (39.71s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-625000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-pwjk8" [fb589d00-dd4f-4f58-bfde-26b8f14121b9] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
E0717 12:52:38.611619   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
helpers_test.go:344: "mysql-7db894d786-pwjk8" [fb589d00-dd4f-4f58-bfde-26b8f14121b9] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 33.066356315s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;": exit status 1 (115.171741ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;": exit status 1 (164.764129ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;": exit status 1 (110.45ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
E0717 12:53:14.453042   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (39.71s)
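
The three non-zero exits above are the expected MySQL startup race rather than a failure: ERROR 2002 means mysqld's socket is not up yet, ERROR 1045 appears while the init scripts are still applying the root password, and the final attempt succeeds. A shell equivalent of the retry the Go harness performs (sketch; the interval is an assumption):

# poll until mysqld accepts the password set in testdata/mysql.yaml
until kubectl --context functional-625000 exec mysql-7db894d786-pwjk8 -- mysql -ppassword -e "show databases;"; do sleep 2; done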

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/38325/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/test/nested/copy/38325/hosts"
E0717 12:52:36.051441   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)
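
What FileSync verifies: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the same path when the cluster starts (38325 is simply the test runner's PID). A hand-run sketch, assuming the MINIKUBE_HOME reported at the top of this log:

FILES=/Users/jenkins/minikube-integration/16890-37879/.minikube/files
mkdir -p "$FILES/etc/test/nested/copy/38325"
echo "Test file for checking file sync process" > "$FILES/etc/test/nested/copy/38325/hosts"
out/minikube-darwin-amd64 -p functional-625000 ssh "cat /etc/test/nested/copy/38325/hosts"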

TestFunctional/parallel/CertSync (2.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/38325.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/38325.pem"
E0717 12:52:33.492668   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.499216   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.509365   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.529495   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.569711   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.649827   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 12:52:33.810066   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/38325.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /usr/share/ca-certificates/38325.pem"
E0717 12:52:34.130472   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/383252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/383252.pem"
E0717 12:52:34.771221   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/383252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /usr/share/ca-certificates/383252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.33s)
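
The numeric filenames checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash aliases for the two synced test certificates; that naming is how consumers of the system trust store find them. A hand check, assuming openssl is present in the node image:

# the printed hash should match the .0 filename the test verifies
out/minikube-darwin-amd64 -p functional-625000 ssh "openssl x509 -hash -noout -in /etc/ssl/certs/38325.pem"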

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-625000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
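
The go-template above dumps every label key on the first node so the test can look for the minikube.k8s.io labels. Pulling a single value needs the template index function, because the key contains dots and a slash (sketch):

kubectl --context functional-625000 get nodes --output=go-template --template='{{index (index .items 0).metadata.labels "minikube.k8s.io/name"}}'   # expect: functional-625000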

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "sudo systemctl is-active crio": exit status 1 (343.331742ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)
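
The non-zero exit here is the pass condition: with docker as the active runtime, crio must be inactive, and systemctl is-active exits with status 3 for an inactive unit (surfaced above as "ssh: Process exited with status 3") while printing "inactive". The positive case, for contrast (sketch):

out/minikube-darwin-amd64 -p functional-625000 ssh "sudo systemctl is-active docker"   # prints "active", exits 0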

TestFunctional/parallel/License (0.78s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.28s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)
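
All three UpdateContextCmd subtests run the identical command; they differ only in the kubeconfig state they start from (nothing to change, no entry for this cluster, no clusters at all). What it rewrites can be checked afterwards against the profile's entry (sketch):

out/minikube-darwin-amd64 -p functional-625000 update-context --alsologtostderr -v=2
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-625000")].cluster.server}'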

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (0.77s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-625000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-625000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format short --alsologtostderr:
I0717 12:54:07.149355   40772 out.go:296] Setting OutFile to fd 1 ...
I0717 12:54:07.149539   40772 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:07.149545   40772 out.go:309] Setting ErrFile to fd 2...
I0717 12:54:07.149550   40772 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:07.149730   40772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
I0717 12:54:07.150324   40772 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:07.150414   40772 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:07.150791   40772 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0717 12:54:07.200163   40772 ssh_runner.go:195] Run: systemctl --version
I0717 12:54:07.200229   40772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0717 12:54:07.249340   40772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55465 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/functional-625000/id_rsa Username:docker}
I0717 12:54:07.339528   40772 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kubernetesui/metrics-scraper      | <none>             | 115053965e86b | 43.8MB |
| docker.io/localhost/my-image                | functional-625000  | cbe4e0cf15503 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-625000  | 16331d0d5d61c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.27.3            | 5780543258cf0 | 71.1MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | b0b1fa0f58c6e | 63.6MB |
| registry.k8s.io/coredns/coredns             | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | 86b6af7dd652c | 296MB  |
| docker.io/kubernetesui/dashboard            | <none>             | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1                | da86e6ba6ca19 | 742kB  |
| gcr.io/google-containers/addon-resizer      | functional-625000  | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3                | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8                | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest             | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest             | 021283c8eb95b | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.27.3            | 08a0c939e61b7 | 121MB  |
| registry.k8s.io/kube-scheduler              | v1.27.3            | 41697ceeb70b3 | 58.4MB |
| docker.io/library/mysql                     | 5.7                | 2be84dd575ee2 | 569MB  |
| docker.io/library/nginx                     | alpine             | 4937520ae206c | 41.4MB |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | 7cffc01dba0e1 | 112MB  |
| registry.k8s.io/pause                       | 3.9                | e6f1816883972 | 744kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format table --alsologtostderr:
I0717 12:54:10.942674   40827 out.go:296] Setting OutFile to fd 1 ...
I0717 12:54:10.942871   40827 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:10.942877   40827 out.go:309] Setting ErrFile to fd 2...
I0717 12:54:10.942881   40827 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:10.943064   40827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
I0717 12:54:10.943720   40827 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:10.943812   40827 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:10.944189   40827 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0717 12:54:10.993517   40827 ssh_runner.go:195] Run: systemctl --version
I0717 12:54:10.993588   40827 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0717 12:54:11.042919   40827 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55465 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/functional-625000/id_rsa Username:docker}
I0717 12:54:11.133632   40827 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"71100000"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"58400000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c9
35de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"16331d0d5d61c06348aebaf0d7851e7545f85d377a02306989f914e6d8012f43","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-625000"],"size":"30"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[
],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"cbe4e0cf15503ff13acb0f48a026da17774ee27b1b3d5ffc807ac72ffcf75e00","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-625000"],"size":"1240000"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":[],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"63600000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":
"121000000"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"112000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-625000"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format json --alsologtostderr:
I0717 12:54:10.661443   40818 out.go:296] Setting OutFile to fd 1 ...
I0717 12:54:10.661643   40818 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:10.661648   40818 out.go:309] Setting ErrFile to fd 2...
I0717 12:54:10.661653   40818 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:10.661859   40818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
I0717 12:54:10.663562   40818 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:10.663695   40818 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:10.664088   40818 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0717 12:54:10.719307   40818 ssh_runner.go:195] Run: systemctl --version
I0717 12:54:10.719380   40818 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0717 12:54:10.768984   40818 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55465 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/functional-625000/id_rsa Username:docker}
I0717 12:54:10.856271   40818 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
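
The JSON listing is the machine-readable twin of the table format above: sizes are byte counts encoded as strings, and "<none>" tags appear escaped as \u003cnone\u003e. A filtering sketch, assuming jq on the host:

out/minikube-darwin-amd64 -p functional-625000 image ls --format json | jq -r '.[] | select(.repoTags[] | startswith("registry.k8s.io/pause")) | .id'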

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr:
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "71100000"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "112000000"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "58400000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "121000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-625000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 16331d0d5d61c06348aebaf0d7851e7545f85d377a02306989f914e6d8012f43
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-625000
size: "30"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests: []
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "63600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image ls --format yaml --alsologtostderr:
I0717 12:54:07.427211   40778 out.go:296] Setting OutFile to fd 1 ...
I0717 12:54:07.427392   40778 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:07.427397   40778 out.go:309] Setting ErrFile to fd 2...
I0717 12:54:07.427401   40778 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:07.427585   40778 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
I0717 12:54:07.428170   40778 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:07.428262   40778 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:07.428629   40778 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0717 12:54:07.477821   40778 ssh_runner.go:195] Run: systemctl --version
I0717 12:54:07.477899   40778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0717 12:54:07.528148   40778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55465 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/functional-625000/id_rsa Username:docker}
I0717 12:54:07.617724   40778 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh pgrep buildkitd: exit status 1 (336.179576ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr: (2.336283777s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 testdata/build --alsologtostderr:
I0717 12:54:08.042420   40794 out.go:296] Setting OutFile to fd 1 ...
I0717 12:54:08.042941   40794 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:08.042948   40794 out.go:309] Setting ErrFile to fd 2...
I0717 12:54:08.042952   40794 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 12:54:08.043148   40794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
I0717 12:54:08.043714   40794 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:08.044321   40794 config.go:182] Loaded profile config "functional-625000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 12:54:08.044724   40794 cli_runner.go:164] Run: docker container inspect functional-625000 --format={{.State.Status}}
I0717 12:54:08.093977   40794 ssh_runner.go:195] Run: systemctl --version
I0717 12:54:08.094043   40794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-625000
I0717 12:54:08.143721   40794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55465 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/functional-625000/id_rsa Username:docker}
I0717 12:54:08.233092   40794 build_images.go:151] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1784507968.tar
I0717 12:54:08.233181   40794 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 12:54:08.242111   40794 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1784507968.tar
I0717 12:54:08.246800   40794 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1784507968.tar: stat -c "%s %y" /var/lib/minikube/build/build.1784507968.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1784507968.tar': No such file or directory
I0717 12:54:08.246857   40794 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1784507968.tar --> /var/lib/minikube/build/build.1784507968.tar (3072 bytes)
I0717 12:54:08.268642   40794 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1784507968
I0717 12:54:08.277355   40794 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1784507968 -xf /var/lib/minikube/build/build.1784507968.tar
I0717 12:54:08.286221   40794 docker.go:339] Building image: /var/lib/minikube/build/build.1784507968
I0717 12:54:08.286291   40794 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-625000 /var/lib/minikube/build/build.1784507968
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.9s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.7s
#6 [2/3] RUN true
#6 DONE 0.2s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:cbe4e0cf15503ff13acb0f48a026da17774ee27b1b3d5ffc807ac72ffcf75e00 done
#8 naming to localhost/my-image:functional-625000 done
#8 DONE 0.0s
I0717 12:54:10.276769   40794 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-625000 /var/lib/minikube/build/build.1784507968: (1.990443889s)
I0717 12:54:10.276836   40794 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1784507968
I0717 12:54:10.286812   40794 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1784507968.tar
I0717 12:54:10.296748   40794 build_images.go:207] Built localhost/my-image:functional-625000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1784507968.tar
I0717 12:54:10.296772   40794 build_images.go:123] succeeded building to: functional-625000
I0717 12:54:10.296777   40794 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.95s)
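
The BuildKit steps above pin down the test's build context: a three-step Dockerfile (FROM the pinned busybox digest, RUN true, ADD content.txt /) plus a small context file. A reconstruction sufficient to reproduce the build by hand; the file contents are assumptions, not copies of testdata/build:

mkdir -p /tmp/build && cd /tmp/build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo test > content.txt
out/minikube-darwin-amd64 -p functional-625000 image build -t localhost/my-image:functional-625000 .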

TestFunctional/parallel/ImageCommands/Setup (3.4s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.344151808s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.40s)

TestFunctional/parallel/DockerEnv/bash (2.65s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && out/minikube-darwin-amd64 status -p functional-625000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && out/minikube-darwin-amd64 status -p functional-625000": (1.605262504s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && docker images"
functional_test.go:518: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && docker images": (1.045094556s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.65s)
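
docker-env works by printing export lines that repoint the host's docker CLI at the dockerd inside the functional-625000 container, which is why the plain "docker images" above lists the cluster's images after the eval. Roughly what it emits for the docker driver (values illustrative, not captured from this run):

out/minikube-darwin-amd64 -p functional-625000 docker-env
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="tcp://127.0.0.1:<forwarded port>"
# export DOCKER_CERT_PATH="/Users/jenkins/minikube-integration/16890-37879/.minikube/certs"
# export MINIKUBE_ACTIVE_DOCKERD="functional-625000"
eval $(out/minikube-darwin-amd64 -p functional-625000 docker-env) && docker images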

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (4.06247608s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.34s)
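
image load --daemon copies an image from the host's Docker daemon into the cluster's, which is how the addon-resizer tag created in Setup becomes visible to image ls. Sketch with this run's tag:

    # push the host-side tag into the cluster's daemon
    out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000
    out/minikube-darwin-amd64 -p functional-625000 image ls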

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (2.297508204s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.897768772s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-625000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
E0717 12:52:43.731799   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (4.342296259s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image save gcr.io/google-containers/addon-resizer:functional-625000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image save gcr.io/google-containers/addon-resizer:functional-625000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.636331366s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image rm gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.47992196s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.80s)
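
Together with ImageSaveToFile above, this closes the tarball round trip: image save exports an image from the cluster to a host-side tar, and image load reimports it. Sketch using the same paths as this run:

    # cluster image -> host tar
    out/minikube-darwin-amd64 -p functional-625000 image save gcr.io/google-containers/addon-resizer:functional-625000 /Users/jenkins/workspace/addon-resizer-save.tar
    # host tar -> cluster image
    out/minikube-darwin-amd64 -p functional-625000 image load /Users/jenkins/workspace/addon-resizer-save.tar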

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-625000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 image save --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr
E0717 12:52:53.972151   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 image save --daemon gcr.io/google-containers/addon-resizer:functional-625000 --alsologtostderr: (2.767527376s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.88s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 40258: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-625000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a3f8594a-89f0-4b1f-8355-fd1ba2c58169] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a3f8594a-89f0-4b1f-8355-fd1ba2c58169] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.010366411s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-625000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
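
The jsonpath query only returns an address because a tunnel is running: minikube tunnel is what gives LoadBalancer services a reachable ingress IP on the Docker driver. Sketch, with the tunnel kept alive in a separate terminal:

    # terminal 1: create the route and keep it open
    out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr
    # terminal 2: read the LoadBalancer ingress IP assigned to the service
    kubectl --context functional-625000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}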

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-625000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 40288: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2674226633/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689623595868180000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2674226633/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689623595868180000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2674226633/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689623595868180000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2674226633/001/test-1689623595868180000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.380341ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 19:53 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 19:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 19:53 test-1689623595868180000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh cat /mount-9p/test-1689623595868180000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-625000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e3b1e847-2ed6-4ffa-bf64-30448bc00cf1] Pending
helpers_test.go:344: "busybox-mount" [e3b1e847-2ed6-4ffa-bf64-30448bc00cf1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e3b1e847-2ed6-4ffa-bf64-30448bc00cf1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e3b1e847-2ed6-4ffa-bf64-30448bc00cf1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.024131168s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-625000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port2674226633/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.73s)
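
minikube mount exports a host directory into the guest over 9p; the first findmnt attempt above fails simply because the mount daemon is not up yet, and the retry succeeds. The equivalent by hand (the host path /tmp/share is illustrative; /mount-9p matches this run):

    # share a host directory into the node; this stays in the foreground
    out/minikube-darwin-amd64 mount -p functional-625000 /tmp/share:/mount-9p --alsologtostderr -v=1
    # from another terminal, verify a 9p mount is present in the guest
    out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"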

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4099321940/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.781536ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4099321940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "sudo umount -f /mount-9p": exit status 1 (356.186187ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-625000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port4099321940/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1: exit status 1 (556.685911ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount1: (1.263876065s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-625000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-625000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3680362210/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.96s)
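
The step that distinguishes VerifyCleanup is mount --kill, used here to tear down all three mount daemons in one shot rather than stopping each individually:

    # terminate every running minikube mount process for this profile
    out/minikube-darwin-amd64 mount -p functional-625000 --kill=true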

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-625000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-625000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-ww6mh" [4a34a252-d141-4272-90e3-e2bde17b0d61] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-775766b4cc-ww6mh" [4a34a252-d141-4272-90e3-e2bde17b0d61] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.007021543s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.12s)
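
The app under test is created with plain kubectl: a deployment from the echoserver image, exposed as a NodePort, followed by a wait for the pod to go Ready. The ServiceCmd tests that follow all resolve this service:

    kubectl --context functional-625000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-625000 expose deployment hello-node --type=NodePort --port=8080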

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "377.242768ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "64.057553ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "378.152802ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "65.654856ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service list
functional_test.go:1458: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 service list: (1.853692748s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.85s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service list -o json
functional_test.go:1488: (dbg) Done: out/minikube-darwin-amd64 -p functional-625000 service list -o json: (1.808121046s)
functional_test.go:1493: Took "1.808253718s" to run "out/minikube-darwin-amd64 -p functional-625000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.81s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service --namespace=default --https --url hello-node
E0717 12:53:55.413771   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
2023/07/17 12:54:06 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service --namespace=default --https --url hello-node: signal: killed (15.00246063s)

                                                
                                                
-- stdout --
	https://127.0.0.1:55908

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:55908
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service hello-node --url --format={{.IP}}: signal: killed (15.001669681s)

                                                
                                                
-- stdout --
	127.0.0.1

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-amd64 -p functional-625000 service hello-node --url
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-625000 service hello-node --url: signal: killed (15.002389345s)

                                                
                                                
-- stdout --
	http://127.0.0.1:55933

                                                
                                                
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

                                                
                                                
** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:55933
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-625000
--- PASS: TestFunctional/delete_addon-resizer_images (0.14s)

                                                
                                    
TestFunctional/delete_my-image_image (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-625000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-625000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (21.24s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-736000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-736000 --driver=docker : (21.239638857s)
--- PASS: TestImageBuild/serial/Setup (21.24s)

                                                
                                    
TestImageBuild/serial/NormalBuild (2.29s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-736000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-736000: (2.288418655s)
--- PASS: TestImageBuild/serial/NormalBuild (2.29s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-736000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.83s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-736000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.66s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-736000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.64s)
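
The build variants above map directly onto docker build options: --build-opt forwards build arguments and cache control, and -f picks a Dockerfile at a non-default path inside the context. Both commands verbatim from this run:

    # pass a build ARG and disable the layer cache
    out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-736000
    # use a Dockerfile nested inside the build context
    out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-736000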

                                                
                                    
TestJSONOutput/start/Command (49.32s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-323000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-323000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (49.319896196s)
--- PASS: TestJSONOutput/start/Command (49.32s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-323000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.58s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-323000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.74s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-323000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-323000 --output=json --user=testUser: (5.742143452s)
--- PASS: TestJSONOutput/stop/Command (5.74s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.69s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-573000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-573000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (337.532081ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3650d2bc-c09c-475e-923c-05fa1676ecb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-573000] minikube v1.30.1 on Darwin 13.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"398c4659-8c5e-4875-b3f4-8c9dd824c93c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"01fcb559-b692-484f-92ce-5ba602005739","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig"}}
	{"specversion":"1.0","id":"f4850e4e-443d-4eae-a93d-f9da48d4fab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"d676f22d-ad88-48ce-b613-e722d921b697","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd9b9a67-ed2a-49f8-bd77-7ace38a8f3a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube"}}
	{"specversion":"1.0","id":"651874bf-80a1-49c9-ac2f-47676d4e0569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d0bb95f3-1e77-4d5b-8b71-650d308ce9c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-573000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-573000
--- PASS: TestErrorJSONOutput (0.69s)
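
With --output=json, minikube emits one CloudEvents-style JSON object per line on stdout instead of human-readable text; the io.k8s.sigs.minikube.error event above is how the test recovers the exit code and the DRV_UNSUPPORTED_OS reason. A sketch of consuming the stream, reusing the earlier json-output-323000 start invocation (the jq filter is illustrative):

    # print just the event types from the structured output
    out/minikube-darwin-amd64 start -p json-output-323000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker | jq -r .type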

                                                
                                    
TestKicCustomNetwork/create_custom_network (24.1s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-531000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-531000 --network=: (21.620859745s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-531000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-531000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-531000: (2.42991486s)
--- PASS: TestKicCustomNetwork/create_custom_network (24.10s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.72s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-687000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-687000 --network=bridge: (21.319736009s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-687000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-687000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-687000: (2.348415272s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.72s)
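
--network chooses the docker network the node container joins: left empty, minikube creates a dedicated network (named after the profile in this run, removed again on delete), while --network=bridge reuses docker's default bridge. The docker network ls call is the verification step:

    out/minikube-darwin-amd64 start -p docker-network-687000 --network=bridge
    docker network ls --format {{.Name}}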

                                                
                                    
TestKicExistingNetwork (23.98s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-157000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-157000 --network=existing-network: (21.387254669s)
helpers_test.go:175: Cleaning up "existing-network-157000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-157000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-157000: (2.262466926s)
--- PASS: TestKicExistingNetwork (23.98s)

                                                
                                    
TestKicCustomSubnet (23.44s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-456000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-456000 --subnet=192.168.60.0/24: (20.986344301s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-456000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-456000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-456000: (2.400816889s)
--- PASS: TestKicCustomSubnet (23.44s)
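
--subnet pins the CIDR of the network minikube creates for the profile; docker network inspect confirms it took effect:

    out/minikube-darwin-amd64 start -p custom-subnet-456000 --subnet=192.168.60.0/24
    # should print 192.168.60.0/24
    docker network inspect custom-subnet-456000 --format "{{(index .IPAM.Config 0).Subnet}}"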

                                                
                                    
TestKicStaticIP (24.22s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-408000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-408000 --static-ip=192.168.200.200: (21.624509411s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-408000 ip
helpers_test.go:175: Cleaning up "static-ip-408000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-408000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-408000: (2.382643688s)
--- PASS: TestKicStaticIP (24.22s)
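
--static-ip does the same for the node address itself, with minikube ip as the check:

    out/minikube-darwin-amd64 start -p static-ip-408000 --static-ip=192.168.200.200
    # should print 192.168.200.200
    out/minikube-darwin-amd64 -p static-ip-408000 ip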

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (50s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-569000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-569000 --driver=docker : (21.677650965s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-571000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-571000 --driver=docker : (21.828737342s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-569000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-571000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-571000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-571000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-571000: (2.442485467s)
helpers_test.go:175: Cleaning up "first-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-569000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-569000: (2.409604736s)
--- PASS: TestMinikubeProfile (50.00s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-912000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-912000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.743379025s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.75s)
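
MountStart boots a node with no Kubernetes (--no-kubernetes) whose only job is to carry a 9p mount configured at start time, rather than via a separate minikube mount process; the --mount-* flags fix the uid/gid, msize, and port. As the VerifyMount steps below show, the share appears in the guest at /minikube-host:

    out/minikube-darwin-amd64 start -p mount-start-1-912000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
    # list the mounted host directory from inside the guest
    out/minikube-darwin-amd64 -p mount-start-1-912000 ssh -- ls /minikube-host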

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-912000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.93s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-923000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-923000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.922998078s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.93s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-923000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.02s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-912000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-912000 --alsologtostderr -v=5: (2.019139071s)
--- PASS: TestMountStart/serial/DeleteFirst (2.02s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-923000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.36s)

                                                
                                    
TestMountStart/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-923000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-923000: (1.528799597s)
--- PASS: TestMountStart/serial/Stop (1.53s)

                                                
                                    
TestMountStart/serial/RestartStopped (9.06s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-923000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-923000: (8.060347234s)
--- PASS: TestMountStart/serial/RestartStopped (9.06s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-923000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (63.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-170000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0717 13:07:33.535008   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:07:36.398749   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-170000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m2.822157187s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.50s)

TestMultiNode/serial/DeployApp2Nodes (46.76s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-170000 -- rollout status deployment/busybox: (3.634633679s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0717 13:08:56.580568   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-m2gdl -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-sgdxr -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-m2gdl -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-sgdxr -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-m2gdl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-sgdxr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (46.76s)
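
The repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines above are a poll-until-ready loop around the same jsonpath query. A minimal Go sketch of that pattern (hypothetical helper, not the real multinode_test.go code; profile name and query taken from this run, retry count and interval assumed):

// Poll kubectl through minikube until the busybox deployment reports one
// pod IP per node, tolerating the transient single-IP state logged above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podIPs(profile string) ([]string, error) {
	// Same invocation as multinode_test.go:493 (shell quoting dropped: no shell here).
	out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", profile,
		"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for attempt := 1; attempt <= 10; attempt++ {
		ips, err := podIPs("multinode-170000")
		if err == nil && len(ips) == 2 {
			fmt.Println("got 2 pod IPs:", ips)
			return
		}
		fmt.Printf("attempt %d: expected 2 Pod IPs but got %d (may be temporary)\n", attempt, len(ips))
		time.Sleep(5 * time.Second)
	}
}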

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-m2gdl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-m2gdl -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-sgdxr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-170000 -- exec busybox-67b7f59bb-sgdxr -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)
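
The pipeline at multinode_test.go:560 extracts the host gateway IP from in-pod nslookup output: awk 'NR==5' keeps line 5, cut -d' ' -f3 keeps its third space-separated field, and the next step pings the result (192.168.65.254 in this run). A hypothetical Go equivalent of that extraction; the sample assumes busybox-1.28-style nslookup output, which may differ in other images:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
func hostIP(nslookupOut string) string {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // like cut, empty fields count
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.65.254\n"
	fmt.Println(hostIP(sample)) // 192.168.65.254, the address pinged above
}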

TestMultiNode/serial/AddNode (15.39s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-170000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-170000 -v 3 --alsologtostderr: (14.450902443s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.39s)

TestMultiNode/serial/ProfileList (0.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.39s)

TestMultiNode/serial/CopyFile (13.08s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp testdata/cp-test.txt multinode-170000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile201151713/001/cp-test_multinode-170000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000:/home/docker/cp-test.txt multinode-170000-m02:/home/docker/cp-test_multinode-170000_multinode-170000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test_multinode-170000_multinode-170000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000:/home/docker/cp-test.txt multinode-170000-m03:/home/docker/cp-test_multinode-170000_multinode-170000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test_multinode-170000_multinode-170000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp testdata/cp-test.txt multinode-170000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile201151713/001/cp-test_multinode-170000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m02:/home/docker/cp-test.txt multinode-170000:/home/docker/cp-test_multinode-170000-m02_multinode-170000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test_multinode-170000-m02_multinode-170000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m02:/home/docker/cp-test.txt multinode-170000-m03:/home/docker/cp-test_multinode-170000-m02_multinode-170000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test_multinode-170000-m02_multinode-170000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp testdata/cp-test.txt multinode-170000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile201151713/001/cp-test_multinode-170000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m03:/home/docker/cp-test.txt multinode-170000:/home/docker/cp-test_multinode-170000-m03_multinode-170000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000 "sudo cat /home/docker/cp-test_multinode-170000-m03_multinode-170000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 cp multinode-170000-m03:/home/docker/cp-test.txt multinode-170000-m02:/home/docker/cp-test_multinode-170000-m03_multinode-170000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 ssh -n multinode-170000-m02 "sudo cat /home/docker/cp-test_multinode-170000-m03_multinode-170000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.08s)

TestMultiNode/serial/StopNode (2.82s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-170000 node stop m03: (1.469431302s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-170000 status: exit status 7 (677.341946ms)

-- stdout --
	multinode-170000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-170000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-170000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr: exit status 7 (677.438377ms)

-- stdout --
	multinode-170000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-170000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-170000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 13:09:49.924939   43957 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:09:49.925111   43957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:09:49.925116   43957 out.go:309] Setting ErrFile to fd 2...
	I0717 13:09:49.925122   43957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:09:49.925298   43957 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:09:49.925476   43957 out.go:303] Setting JSON to false
	I0717 13:09:49.925498   43957 mustload.go:65] Loading cluster: multinode-170000
	I0717 13:09:49.925529   43957 notify.go:220] Checking for updates...
	I0717 13:09:49.925790   43957 config.go:182] Loaded profile config "multinode-170000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:09:49.925805   43957 status.go:255] checking status of multinode-170000 ...
	I0717 13:09:49.926222   43957 cli_runner.go:164] Run: docker container inspect multinode-170000 --format={{.State.Status}}
	I0717 13:09:49.976725   43957 status.go:330] multinode-170000 host status = "Running" (err=<nil>)
	I0717 13:09:49.976775   43957 host.go:66] Checking if "multinode-170000" exists ...
	I0717 13:09:49.977075   43957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-170000
	I0717 13:09:50.027603   43957 host.go:66] Checking if "multinode-170000" exists ...
	I0717 13:09:50.027864   43957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:09:50.027928   43957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-170000
	I0717 13:09:50.077247   43957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56338 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/multinode-170000/id_rsa Username:docker}
	I0717 13:09:50.165558   43957 ssh_runner.go:195] Run: systemctl --version
	I0717 13:09:50.170971   43957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:09:50.181810   43957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-170000
	I0717 13:09:50.231411   43957 kubeconfig.go:92] found "multinode-170000" server: "https://127.0.0.1:56337"
	I0717 13:09:50.231435   43957 api_server.go:166] Checking apiserver status ...
	I0717 13:09:50.231485   43957 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 13:09:50.242914   43957 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2150/cgroup
	W0717 13:09:50.252076   43957 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2150/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 13:09:50.252133   43957 ssh_runner.go:195] Run: ls
	I0717 13:09:50.256669   43957 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56337/healthz ...
	I0717 13:09:50.262418   43957 api_server.go:279] https://127.0.0.1:56337/healthz returned 200:
	ok
	I0717 13:09:50.262430   43957 status.go:421] multinode-170000 apiserver status = Running (err=<nil>)
	I0717 13:09:50.262441   43957 status.go:257] multinode-170000 status: &{Name:multinode-170000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 13:09:50.262452   43957 status.go:255] checking status of multinode-170000-m02 ...
	I0717 13:09:50.262684   43957 cli_runner.go:164] Run: docker container inspect multinode-170000-m02 --format={{.State.Status}}
	I0717 13:09:50.312326   43957 status.go:330] multinode-170000-m02 host status = "Running" (err=<nil>)
	I0717 13:09:50.312350   43957 host.go:66] Checking if "multinode-170000-m02" exists ...
	I0717 13:09:50.312625   43957 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-170000-m02
	I0717 13:09:50.361803   43957 host.go:66] Checking if "multinode-170000-m02" exists ...
	I0717 13:09:50.362064   43957 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 13:09:50.362116   43957 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-170000-m02
	I0717 13:09:50.411790   43957 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56375 SSHKeyPath:/Users/jenkins/minikube-integration/16890-37879/.minikube/machines/multinode-170000-m02/id_rsa Username:docker}
	I0717 13:09:50.500635   43957 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 13:09:50.511214   43957 status.go:257] multinode-170000-m02 status: &{Name:multinode-170000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 13:09:50.511230   43957 status.go:255] checking status of multinode-170000-m03 ...
	I0717 13:09:50.511482   43957 cli_runner.go:164] Run: docker container inspect multinode-170000-m03 --format={{.State.Status}}
	I0717 13:09:50.560531   43957 status.go:330] multinode-170000-m03 host status = "Stopped" (err=<nil>)
	I0717 13:09:50.560560   43957 status.go:343] host is not running, skipping remaining checks
	I0717 13:09:50.560570   43957 status.go:257] multinode-170000-m03 status: &{Name:multinode-170000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.82s)
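
The exit status 7 above is expected, not a failure: per minikube status --help, the exit code encodes host, cluster, and Kubernetes state as bit flags from right to left (1 + 2 + 4 = 7 when all three are not OK), which a stopped node trips here. A hypothetical sketch of reading that code rather than treating any non-zero exit as an error:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-170000", "status").Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Expected while m03 is stopped; stdout still lists per-node state.
		fmt.Printf("status exited %d:\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("all nodes running:\n%s", out)
}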

TestMultiNode/serial/StartAfterStop (12.97s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-170000 node start m03 --alsologtostderr: (11.984007654s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (12.97s)

TestMultiNode/serial/RestartKeepsNodes (97.15s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-170000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-170000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-170000: (22.823752213s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-170000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-170000 --wait=true -v=8 --alsologtostderr: (1m14.236932964s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-170000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (97.15s)

TestMultiNode/serial/DeleteNode (5.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-170000 node delete m03: (4.97064861s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)
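
The go-template at multinode_test.go:432 prints one Ready-condition status per node, so after deleting m03 the check amounts to counting "True" lines. A hypothetical sketch of the same check (outer quotes from the logged command dropped, since no shell is involved):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Printf("%d Ready nodes\n", strings.Count(string(out), "True")) // 2 expected here
}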

TestMultiNode/serial/StopMultiNode (21.76s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 stop
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-170000 stop: (21.4757361s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-170000 status: exit status 7 (141.403207ms)

-- stdout --
	multinode-170000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-170000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr: exit status 7 (141.35053ms)

-- stdout --
	multinode-170000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-170000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0717 13:12:08.095407   44411 out.go:296] Setting OutFile to fd 1 ...
	I0717 13:12:08.095596   44411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:12:08.095601   44411 out.go:309] Setting ErrFile to fd 2...
	I0717 13:12:08.095605   44411 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 13:12:08.095797   44411 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16890-37879/.minikube/bin
	I0717 13:12:08.095985   44411 out.go:303] Setting JSON to false
	I0717 13:12:08.096009   44411 mustload.go:65] Loading cluster: multinode-170000
	I0717 13:12:08.096054   44411 notify.go:220] Checking for updates...
	I0717 13:12:08.096337   44411 config.go:182] Loaded profile config "multinode-170000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 13:12:08.096350   44411 status.go:255] checking status of multinode-170000 ...
	I0717 13:12:08.096753   44411 cli_runner.go:164] Run: docker container inspect multinode-170000 --format={{.State.Status}}
	I0717 13:12:08.145321   44411 status.go:330] multinode-170000 host status = "Stopped" (err=<nil>)
	I0717 13:12:08.145339   44411 status.go:343] host is not running, skipping remaining checks
	I0717 13:12:08.145346   44411 status.go:257] multinode-170000 status: &{Name:multinode-170000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 13:12:08.145371   44411 status.go:255] checking status of multinode-170000-m02 ...
	I0717 13:12:08.145610   44411 cli_runner.go:164] Run: docker container inspect multinode-170000-m02 --format={{.State.Status}}
	I0717 13:12:08.194973   44411 status.go:330] multinode-170000-m02 host status = "Stopped" (err=<nil>)
	I0717 13:12:08.195003   44411 status.go:343] host is not running, skipping remaining checks
	I0717 13:12:08.195012   44411 status.go:257] multinode-170000-m02 status: &{Name:multinode-170000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.76s)

TestMultiNode/serial/RestartMultiNode (56.87s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-170000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0717 13:12:33.535399   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:12:36.399525   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-170000 --wait=true -v=8 --alsologtostderr --driver=docker : (56.072371116s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-170000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.87s)

TestMultiNode/serial/ValidateNameConflict (25.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-170000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-170000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-170000-m02 --driver=docker : exit status 14 (457.823586ms)

-- stdout --
	* [multinode-170000-m02] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-170000-m02' is duplicated with machine name 'multinode-170000-m02' in profile 'multinode-170000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-170000-m03 --driver=docker 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-170000-m03 --driver=docker : (22.174769497s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-170000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-170000: exit status 80 (457.930621ms)

-- stdout --
	* Adding node m03 to cluster multinode-170000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-170000-m03 already exists in multinode-170000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-170000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-170000-m03: (2.417254701s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.55s)

TestPreload (157.82s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-967000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0717 13:13:59.458759   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-967000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m30.429326596s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-967000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-967000 image pull gcr.io/k8s-minikube/busybox: (2.363364079s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-967000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-967000: (10.829045651s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-967000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-967000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (51.318792993s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-967000 image list
helpers_test.go:175: Cleaning up "test-preload-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-967000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-967000: (2.599947943s)
--- PASS: TestPreload (157.82s)

TestScheduledStopUnix (95.48s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-514000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-514000 --memory=2048 --driver=docker : (21.573218164s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-514000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-514000 -n scheduled-stop-514000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-514000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-514000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-514000 -n scheduled-stop-514000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-514000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-514000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0717 13:17:33.538645   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:17:36.401387   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-514000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-514000: exit status 7 (95.80166ms)

-- stdout --
	scheduled-stop-514000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-514000 -n scheduled-stop-514000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-514000 -n scheduled-stop-514000: exit status 7 (92.10453ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-514000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-514000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-514000: (2.18763538s)
--- PASS: TestScheduledStopUnix (95.48s)
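
The flags exercised above, in order: --schedule arms a delayed stop, --cancel-scheduled disarms it, and status --format={{.TimeToStop}} inspects the pending timer. A hypothetical driver sketch chaining only commands that appear in this log (error handling elided for brevity):

package main

import (
	"fmt"
	"os/exec"
)

// run is a tiny helper for the sketch; it returns combined output.
func run(args ...string) string {
	out, _ := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
	return string(out)
}

func main() {
	p := "scheduled-stop-514000"
	run("stop", "-p", p, "--schedule", "5m")                      // arm a stop 5 minutes out
	fmt.Print(run("status", "--format={{.TimeToStop}}", "-p", p)) // show the pending timer
	run("stop", "-p", p, "--cancel-scheduled")                    // disarm it
	fmt.Print(run("status", "--format={{.Host}}", "-p", p))       // host should still be Running
}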

TestSkaffold (116.69s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3820770487 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-508000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-508000 --memory=2600 --driver=docker : (20.825478319s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3820770487 run --minikube-profile skaffold-508000 --kube-context skaffold-508000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3820770487 run --minikube-profile skaffold-508000 --kube-context skaffold-508000 --status-check=true --port-forward=false --interactive=false: (1m18.995490912s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5bb84457c5-gttw9" [400aef11-5d98-4dcb-adc5-ce6315efec76] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014275913s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-59b7bd655f-7gtp9" [ba14a115-448a-413a-ba9c-d427c80b32cb] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010161213s
helpers_test.go:175: Cleaning up "skaffold-508000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-508000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-508000: (3.030688994s)
--- PASS: TestSkaffold (116.69s)

TestInsufficientStorage (10.69s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-538000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-538000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.765114158s)

-- stdout --
	{"specversion":"1.0","id":"0b3dfebe-63b2-45bb-a517-0ab23e1ae9de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-538000] minikube v1.30.1 on Darwin 13.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bdff6e06-8cdc-450f-991b-5eb333e9920c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16890"}}
	{"specversion":"1.0","id":"da803b98-ee9e-4b07-ad69-a61ce18465e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig"}}
	{"specversion":"1.0","id":"9378daef-f2fe-4725-864d-25a06d14397d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"246ccf91-b0b1-4577-8e3a-6ae34c5f0bcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"886a1f2a-26a7-432e-8f4b-8877845b64f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube"}}
	{"specversion":"1.0","id":"547b26fe-0e53-4781-b2b8-8f9d25293339","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"06a3729a-843c-414a-a76b-500b9f15872d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b49599a4-6373-4d88-bc82-dd64d79def62","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"427ab634-5f6d-49e1-831d-f30f670a554a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b62615a7-dc19-4293-89cb-5d72f93c5d10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"36cb8b01-2c40-4281-9a2f-902138e121b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-538000 in cluster insufficient-storage-538000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ffd10e61-230a-4076-8d62-8fa5c2fa7423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f354e25-f05c-409d-985a-779c7b8ced4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"779563bd-979f-4eac-bfe7-681388492962","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-538000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-538000 --output=json --layout=cluster: exit status 7 (357.663046ms)

-- stdout --
	{"Name":"insufficient-storage-538000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-538000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 13:19:53.142228   46016 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-538000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-538000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-538000 --output=json --layout=cluster: exit status 7 (351.14752ms)

-- stdout --
	{"Name":"insufficient-storage-538000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-538000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0717 13:19:53.494216   46026 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-538000" does not appear in /Users/jenkins/minikube-integration/16890-37879/kubeconfig
	E0717 13:19:53.504496   46026 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/insufficient-storage-538000/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-538000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-538000: (2.216883804s)
--- PASS: TestInsufficientStorage (10.69s)
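
With --output=json, start emits one CloudEvents-style JSON object per line, and the storage check surfaces as type io.k8s.sigs.minikube.error with data.exitcode "26" (RSRC_DOCKER_STORAGE), matching exit status 26 above. A minimal decoding sketch (hypothetical, not the harness code; the struct is trimmed to the two fields this scan needs):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g.: out/minikube-darwin-amd64 start ... --output=json | go run .
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // skip anything that isn't a JSON event
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["exitcode"] == "26" {
			fmt.Println("storage check tripped:", ev.Data["name"]) // RSRC_DOCKER_STORAGE
		}
	}
}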

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.71s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16890
- KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3119897536/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3119897536/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3119897536/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3119897536/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (11.71s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.76s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin
- MINIKUBE_LOCATION=16890
- KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3195602769/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3195602769/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3195602769/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3195602769/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (14.76s)

TestStoppedBinaryUpgrade/Setup (2.05s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.49s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-327000
version_upgrade_test.go:218: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-327000: (3.490073806s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.49s)

TestPause/serial/Start (49.16s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-898000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-898000 --memory=2048 --install-addons=false --wait=all --driver=docker : (49.161237887s)
--- PASS: TestPause/serial/Start (49.16s)

TestPause/serial/SecondStartNoReconfiguration (35.79s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-898000 --alsologtostderr -v=1 --driver=docker 
E0717 13:27:15.861509   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-898000 --alsologtostderr -v=1 --driver=docker : (35.779264153s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (35.79s)

TestPause/serial/Pause (0.67s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-898000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.67s)

TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-898000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-898000 --output=json --layout=cluster: exit status 2 (375.508456ms)

-- stdout --
	{"Name":"pause-898000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.30.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-898000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
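
Note: the --layout=cluster status above encodes state as HTTP-style codes (200 OK, 405 Stopped, 418 Paused). A minimal sketch for pulling those fields back out, assuming jq is available on the host (it is not part of the test):

	# Field names are exactly those in the stdout block above. "minikube status"
	# itself exits 2 while the cluster is paused (as logged), but the pipeline
	# reports jq's exit status, so this line succeeds.
	out/minikube-darwin-amd64 status -p pause-898000 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'
	# for the run above this prints: Paused, then Stopped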

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-898000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.7s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-898000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.70s)

TestPause/serial/DeletePaused (2.44s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-898000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-898000 --alsologtostderr -v=5: (2.444816146s)
--- PASS: TestPause/serial/DeletePaused (2.44s)

TestPause/serial/VerifyDeletedResources (0.51s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-898000
E0717 13:27:33.574834   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-898000: exit status 1 (49.642545ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-898000: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.51s)
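
Note: deletion is verified by probing Docker directly; "docker volume inspect" on a missing volume prints an empty [] array and exits 1, which is exactly the non-zero exit captured above. A manual spot-check along the same lines (profile name taken from the log):

	docker ps -a --filter name=pause-898000 --format '{{.Names}}'      # expect no output
	docker volume inspect pause-898000                                 # expect exit 1, "no such volume"
	docker network ls --filter name=pause-898000 --format '{{.Name}}'  # expect no output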

TestNoKubernetes/serial/StartNoK8sWithVersion (0.4s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (398.289908ms)

-- stdout --
	* [NoKubernetes-408000] minikube v1.30.1 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16890
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16890-37879/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16890-37879/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
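
Note: exit status 14 is the expected outcome here; the stderr block shows minikube's MK_USAGE error because --no-kubernetes and --kubernetes-version are mutually exclusive. Following the hint in that message, either of these resolves it:

	# drop the version flag entirely (the form the later subtests use) ...
	out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --driver=docker
	# ... or, if the version came from global config, unset it first:
	out/minikube-darwin-amd64 config unset kubernetes-version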

TestNoKubernetes/serial/StartWithK8s (22.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-408000 --driver=docker 
E0717 13:27:36.437938   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-408000 --driver=docker : (21.681493766s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-408000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.06s)

TestNoKubernetes/serial/StartWithStopK8s (8.67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --driver=docker : (6.104809178s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-408000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-408000 status -o json: exit status 2 (355.717165ms)

-- stdout --
	{"Name":"NoKubernetes-408000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-408000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-408000: (2.210547009s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.67s)

TestNoKubernetes/serial/Start (7.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-408000 --no-kubernetes --driver=docker : (7.760927295s)
--- PASS: TestNoKubernetes/serial/Start (7.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-408000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-408000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.696773ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
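
Note: the assertion passes because "systemctl is-active" exits non-zero for an inactive unit; the "Process exited with status 3" above is the conventional not-active code surfaced through ssh. A sketch that makes the exit code visible (single quotes keep $? from being expanded by the local shell):

	out/minikube-darwin-amd64 ssh -p NoKubernetes-408000 \
	  'sudo systemctl is-active kubelet; echo "is-active exit: $?"'
	# an inactive kubelet prints "inactive" and exit code 3, matching the log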

TestNoKubernetes/serial/ProfileList (1.22s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.22s)

TestNoKubernetes/serial/Stop (1.53s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-408000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-408000: (1.526020895s)
--- PASS: TestNoKubernetes/serial/Stop (1.53s)

TestNoKubernetes/serial/StartNoArgs (9.08s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-408000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-408000 --driver=docker : (9.080997844s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-408000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-408000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.34716ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestNetworkPlugins/group/auto/Start (50.27s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (50.271772823s)
--- PASS: TestNetworkPlugins/group/auto/Start (50.27s)

TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

TestNetworkPlugins/group/auto/NetCatPod (11.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-p5rj5" [d0b8208c-5ade-4fa9-b6f4-0b4f91706a81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-p5rj5" [d0b8208c-5ade-4fa9-b6f4-0b4f91706a81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.009220327s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.27s)
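
Note: the test polls for pods matching app=netcat until they are Running and Ready. Roughly the same wait expressed with plain kubectl, as a sketch (the test uses its own polling helper; selector, context, and timeout are taken from the log):

	kubectl --context auto-859000 wait --for=condition=Ready \
	  pod -l app=netcat --timeout=15m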

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
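
Note: Localhost and HairPin differ only in the dial target. "localhost 8080" never leaves the pod, while "netcat 8080" resolves the deployment's companion service (assuming the usual netcat test manifest), so the traffic must leave the pod and be routed back to it; the hairpin case only passes if the CNI supports that round trip. The two probes side by side, verbatim from the log:

	kubectl --context auto-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # in-pod loopback
	kubectl --context auto-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # out and back via the service name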

TestNetworkPlugins/group/kindnet/Start (50.26s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0717 13:29:59.702223   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:30:39.499871   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (50.257298589s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (50.26s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sbcnq" [67b40203-9508-4adc-bb12-4f7a73ba26d4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.016686695s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
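
Note: same polling pattern as the NetCatPod checks, but aimed at the CNI's own controller pod in kube-system. A kubectl-only sketch of the equivalent readiness wait (selector and namespace from the log; the test's own poller is what actually runs):

	kubectl --context kindnet-859000 -n kube-system wait --for=condition=Ready \
	  pod -l app=kindnet --timeout=10m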

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-lzmms" [cbcf7ba5-09a2-4707-9e29-7b319274246a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-lzmms" [cbcf7ba5-09a2-4707-9e29-7b319274246a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008712333s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.25s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (64.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m4.761505701s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.76s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tv5j7" [80ccca35-9534-48fd-ad2c-af0e4f5c6ba8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.019043024s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-krzn7" [ae5386a7-73cc-475a-933a-55e9d2036c51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 13:32:33.575578   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:32:36.438514   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-krzn7" [ae5386a7-73cc-475a-933a-55e9d2036c51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.007055048s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.27s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/custom-flannel/Start (50.53s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (50.526604669s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.53s)

TestNetworkPlugins/group/false/Start (36.91s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (36.909123879s)
--- PASS: TestNetworkPlugins/group/false/Start (36.91s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-82vsh" [16476035-847e-4c2a-8ecd-cf320537147f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-82vsh" [16476035-847e-4c2a-8ecd-cf320537147f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.007781965s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.31s)

TestNetworkPlugins/group/false/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.37s)

TestNetworkPlugins/group/false/NetCatPod (13.27s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-wtqp4" [43cb85b0-3412-4854-8aeb-45e92bab278b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-wtqp4" [43cb85b0-3412-4854-8aeb-45e92bab278b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.009083114s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.27s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/false/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

TestNetworkPlugins/group/false/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.12s)

TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/Start (37.8s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (37.799786555s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (37.80s)

TestNetworkPlugins/group/flannel/Start (50.58s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
E0717 13:34:39.090774   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:34:59.571919   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (50.579515308s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.58s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-qhvm2" [46941d49-303e-4fe8-abe6-6921e429dbfe] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-qhvm2" [46941d49-303e-4fe8-abe6-6921e429dbfe] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.009406402s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rwl6j" [8a1b43a7-a0c1-4feb-8988-99ba3dfd2700] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.015000535s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-f5mv9" [66e04b95-46ab-4c46-a533-4427a0aca501] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-f5mv9" [66e04b95-46ab-4c46-a533-4427a0aca501] Running
E0717 13:35:40.534275   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:35:41.842262   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:41.847385   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:41.857453   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:41.878543   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:41.918700   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:42.000248   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:42.184762   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:42.505012   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:43.146007   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:35:44.427181   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.007390508s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (36.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0717 13:35:52.150269   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:36:02.390483   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (36.817352313s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.82s)

TestNetworkPlugins/group/kubenet/Start (45.97s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0717 13:36:22.871613   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-859000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (45.967381601s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (45.97s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.39s)

TestNetworkPlugins/group/bridge/NetCatPod (13.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-kpwcc" [47d75d44-e512-40e5-b8b7-80eeb7d96e17] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-kpwcc" [47d75d44-e512-40e5-b8b7-80eeb7d96e17] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.009783064s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.28s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-859000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-859000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-shbtx" [f07ffbf6-ba71-40f9-bb17-b303ebef9575] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-shbtx" [f07ffbf6-ba71-40f9-bb17-b303ebef9575] Running
E0717 13:37:03.831994   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.010311946s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.31s)

TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-859000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-859000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (78.63s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-148000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3
E0717 13:37:33.576324   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:37:35.584307   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:37:36.439432   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 13:37:45.825830   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:38:06.306241   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:38:25.754451   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:38:47.266468   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-148000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3: (1m18.631758718s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.63s)
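
Note: with --preload=false minikube skips the preloaded image/binary tarball and pulls each component image individually, which is a large part of why this FirstStart runs well over a minute. A quick audit that the images landed anyway ("image ls" is a standard minikube subcommand):

	out/minikube-darwin-amd64 -p no-preload-148000 image ls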

TestStartStop/group/no-preload/serial/DeployApp (9.32s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-148000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cedf17f2-6a36-420d-b76b-539cb01f3f98] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cedf17f2-6a36-420d-b76b-539cb01f3f98] Running
E0717 13:38:58.390605   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.395785   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.406127   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.426388   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.467484   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.549619   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:58.709859   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:59.030954   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:38:59.672563   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.013749718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-148000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-148000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 13:39:00.792874   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.798045   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.808179   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.828298   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.868407   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.948677   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:00.953061   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-148000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.12367956s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-148000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.18s)
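
Note: this step exercises minikube's per-addon image and registry override flags, both shown verbatim above. A sketch of the same flow; the jsonpath query at the end is an assumption, not from the report:

    out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-148000 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # Confirm the deployment picked up the overridden image.
    kubectl --context no-preload-148000 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'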

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (10.83s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-148000 --alsologtostderr -v=3
E0717 13:39:01.109671   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:01.430581   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:02.071620   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:03.353880   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:03.513402   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:39:05.915622   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:08.634169   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:39:11.037877   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-148000 --alsologtostderr -v=3: (10.830383571s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.83s)
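
Note: the interleaved cert_rotation.go errors are not produced by the Stop step itself; they appear to come from client-go's certificate-reload watcher inside the long-running test process (pid 38325), which still tracks kubeconfig entries for profiles such as false-859000 and custom-flannel-859000 that earlier tests deleted. A hedged way to spot such stale entries; the jsonpath expression is an assumption and presumes every user entry carries a client-certificate path:

    # List kubeconfig users whose client certificate file no longer exists.
    kubectl config view -o jsonpath='{range .users[*]}{.name}{" "}{.user.client-certificate}{"\n"}{end}' |
    while read -r name cert; do
      [ -n "$cert" ] && [ ! -f "$cert" ] && echo "stale client cert for $name: $cert"
    done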

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.42s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-148000 -n no-preload-148000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-148000 -n no-preload-148000: exit status 7 (108.853261ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-148000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.42s)
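
Note: exit status 7 is expected for a stopped profile: minikube status encodes host, kubelet, and apiserver health as bits of the exit code, so 1+2+4 = 7 when all three are down (a reading of minikube's status help text; the test itself only asserts "may be ok"). Sketch:

    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-148000 -n no-preload-148000
    rc=$?
    # 7 == host, kubelet, and apiserver all stopped; addons can still be toggled offline.
    [ "$rc" -eq 7 ] && out/minikube-darwin-amd64 addons enable dashboard -p no-preload-148000 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4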

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (332.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-148000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3
E0717 13:39:18.603178   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:39:18.874388   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:39:21.278384   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:32.012255   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:39:39.355409   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:39:41.758905   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:39:46.295713   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:40:09.199317   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:40:12.776778   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:12.781970   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:12.792226   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:12.812694   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:12.852846   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:12.932935   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:13.095080   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:13.417298   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:14.057718   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:15.339871   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:17.901191   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:20.328986   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:40:22.734889   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:40:23.022131   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:27.742456   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:27.748912   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:27.760808   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:27.782392   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:27.822615   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:27.902785   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:28.063410   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:28.384352   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:29.026545   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:30.307014   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:32.867494   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:33.263096   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:37.989630   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:41.858580   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:40:48.230314   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:40:53.743982   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:40:55.079192   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:41:08.711203   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:41:09.611129   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-148000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3: (5m32.5306396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-148000 -n no-preload-148000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (332.94s)
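
Note: SecondStart replays the FirstStart flags against the stopped profile, so the restart path is what is being measured. With --preload=false minikube skips the preloaded image tarball and pulls images individually, which likely accounts for much of the 5m32s wall time. Condensed, commands verbatim from the log:

    out/minikube-darwin-amd64 start -p no-preload-148000 --memory=2200 --alsologtostderr \
      --wait=true --preload=false --driver=docker --kubernetes-version=v1.27.3
    out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-148000 -n no-preload-148000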

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (1.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-378000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-378000 --alsologtostderr -v=3: (1.524600862s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.52s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-378000 -n old-k8s-version-378000: exit status 7 (92.769919ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-378000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-v28tb" [079b7bb8-e9b4-4794-b966-c568b976d488] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-v28tb" [079b7bb8-e9b4-4794-b966-c568b976d488] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.015080058s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (14.02s)
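
Note: the readiness poll above can be expressed as a single kubectl wait (a substitution for the test's helper loop; label and namespace are from the log, and the 540s timeout mirrors the 9m0s budget):

    kubectl --context no-preload-148000 -n kubernetes-dashboard wait \
      --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s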

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-v28tb" [079b7bb8-e9b4-4794-b966-c568b976d488] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008640437s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-148000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-148000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.41s)
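
Note: VerifyKubernetesImages shells into the node and lists images through the CRI. A hedged sketch of the same check; the jq filter is an assumption (crictl's JSON output nests tags under images[].repoTags):

    out/minikube-darwin-amd64 ssh -p no-preload-148000 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]' | sort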

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.94s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-148000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-148000 -n no-preload-148000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-148000 -n no-preload-148000: exit status 2 (373.443344ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-148000 -n no-preload-148000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-148000 -n no-preload-148000: exit status 2 (375.527155ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-148000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-148000 -n no-preload-148000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-148000 -n no-preload-148000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.94s)
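
Note: Pause asserts both directions of the transition: while paused, {{.APIServer}} prints Paused and {{.Kubelet}} prints Stopped, each with exit status 2, and after unpause both status calls succeed again. Condensed; the post-unpause values are assumptions since the log does not show them:

    out/minikube-darwin-amd64 pause -p no-preload-148000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-148000   # Paused, rc=2
    out/minikube-darwin-amd64 unpause -p no-preload-148000
    out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-148000   # Running, rc=0 (assumed)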

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (50.53s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-688000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3
E0717 13:45:12.778778   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:45:27.741108   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:45:40.464722   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:45:41.858145   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:45:55.433163   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-688000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3: (50.53053416s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (50.53s)
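
Note: --embed-certs inlines the client certificate and key into kubeconfig as client-certificate-data / client-key-data fields instead of referencing files under .minikube/profiles. One way to confirm; the jsonpath filter follows the kubectl cheat-sheet style and is an assumption here:

    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-688000")].user.client-certificate-data}' \
      | head -c 40; echo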

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-688000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3fde1fa-b1b8-445a-b83b-488d6f7a4f6f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3fde1fa-b1b8-445a-b83b-488d6f7a4f6f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.013219307s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-688000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-688000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-688000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.116788847s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-688000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.18s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.9s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-688000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-688000 --alsologtostderr -v=3: (10.897105016s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.90s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-688000 -n embed-certs-688000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-688000 -n embed-certs-688000: exit status 7 (93.903296ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-688000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (337.07s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-688000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3
E0717 13:46:25.569037   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:46:53.256482   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:46:56.291609   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:47:19.517132   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 13:47:23.980219   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kubenet-859000/client.crt: no such file or directory
E0717 13:47:25.356569   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
E0717 13:47:33.591333   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
E0717 13:47:36.455071   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/functional-625000/client.crt: no such file or directory
E0717 13:48:50.805449   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:50.811356   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:50.821815   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:50.843990   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:50.886117   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:50.966499   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:51.126992   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:51.447275   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:52.088066   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:53.370310   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:55.930556   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:48:58.405370   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/custom-flannel-859000/client.crt: no such file or directory
E0717 13:49:00.807630   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/false-859000/client.crt: no such file or directory
E0717 13:49:01.052397   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:49:11.292586   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:49:18.617219   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:49:31.772813   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:49:32.028007   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
E0717 13:50:12.734969   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
E0717 13:50:12.778168   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
E0717 13:50:27.740801   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/flannel-859000/client.crt: no such file or directory
E0717 13:50:41.670412   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/auto-859000/client.crt: no such file or directory
E0717 13:50:41.857157   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
E0717 13:51:25.568271   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/bridge-859000/client.crt: no such file or directory
E0717 13:51:34.655421   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/no-preload-148000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-688000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3: (5m36.588182095s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-688000 -n embed-certs-688000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (337.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5fs5t" [163669af-bbb2-4d34-99b2-0fbe82dfe86d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 13:52:04.970771   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/kindnet-859000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5fs5t" [163669af-bbb2-4d34-99b2-0fbe82dfe86d] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 18.014584895s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (18.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5fs5t" [163669af-bbb2-4d34-99b2-0fbe82dfe86d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008044282s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-688000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-688000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-688000 --alsologtostderr -v=1
E0717 13:52:25.355514   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/calico-859000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-688000 -n embed-certs-688000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-688000 -n embed-certs-688000: exit status 2 (374.792222ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-688000 -n embed-certs-688000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-688000 -n embed-certs-688000: exit status 2 (372.988695ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-688000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-688000 -n embed-certs-688000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-688000 -n embed-certs-688000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-981000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3
E0717 13:52:33.591357   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/addons-702000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-981000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3: (50.096460081s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (50.10s)
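
Note: --apiserver-port=8444 moves the API server off the default 8443 inside the cluster; with the docker driver on macOS the kubeconfig may still point at a locally forwarded 127.0.0.1 port rather than 8444 itself. To see what the context actually targets (the jsonpath filter is an assumption):

    kubectl config view \
      -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-981000")].cluster.server}'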

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-981000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [34fc9029-f935-4b3d-a789-d276a704dcc8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [34fc9029-f935-4b3d-a789-d276a704dcc8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.015571954s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-981000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-981000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-981000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.166267777s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-981000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-981000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-981000 --alsologtostderr -v=3: (10.860614636s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000: exit status 7 (92.821899ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-981000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-981000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-981000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3: (5m27.994067866s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (328.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xmhcc" [8c93aad7-e49a-470a-818b-9a2f90ccd2ef] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xmhcc" [8c93aad7-e49a-470a-818b-9a2f90ccd2ef] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.012841342s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-xmhcc" [8c93aad7-e49a-470a-818b-9a2f90ccd2ef] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009939513s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-981000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-981000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-981000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
E0717 13:59:32.028849   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/skaffold-508000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000: exit status 2 (430.57707ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000: exit status 2 (375.21458ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-981000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-981000 -n default-k8s-diff-port-981000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.6s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-321000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-321000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3: (34.598640033s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.60s)
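
Note: this profile combines --network-plugin=cni with a kubeadm pod CIDR override passed through --extra-config. Verifying that the CIDR landed on the node (the jsonpath is an assumption; a single node is typically allocated a /24 out of the 10.42.0.0/16 range):

    kubectl --context newest-cni-321000 get node \
      -o jsonpath='{.items[0].spec.podCIDR}'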

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-321000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 14:00:12.782191   38325 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16890-37879/.minikube/profiles/enable-default-cni-859000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-321000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.263687551s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-321000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-321000 --alsologtostderr -v=3: (11.377920834s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-321000 -n newest-cni-321000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-321000 -n newest-cni-321000: exit status 7 (93.446721ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-321000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-321000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-321000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3: (28.854502448s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-321000 -n newest-cni-321000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-321000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.42s)
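
Note: the check above parses "sudo crictl images -o json" and flags any image outside minikube's expected set (here, the kindnet CNI image pulled for --network-plugin=cni). A sketch of eyeballing the same list, assuming jq is installed on the host:
  out/minikube-darwin-amd64 ssh -p newest-cni-321000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'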

TestStartStop/group/newest-cni/serial/Pause (3.64s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-321000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-321000 -n newest-cni-321000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-321000 -n newest-cni-321000: exit status 2 (384.603019ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-321000 -n newest-cni-321000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-321000 -n newest-cni-321000: exit status 2 (375.619554ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-321000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-321000 -n newest-cni-321000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-321000 -n newest-cni-321000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.64s)
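
Note: pause freezes the workloads but leaves the node container up, so {{.APIServer}} reports Paused while {{.Kubelet}} reports Stopped, and status exits 2 until unpause (both exits visible above). The same round trip by hand:
  out/minikube-darwin-amd64 pause -p newest-cni-321000 --alsologtostderr -v=1
  out/minikube-darwin-amd64 status -p newest-cni-321000 --format='{{.APIServer}}/{{.Kubelet}}' || true   # non-zero exit is expected while paused
  out/minikube-darwin-amd64 unpause -p newest-cni-321000 --alsologtostderr -v=1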

Test skip (19/317)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
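
Note: "Preload exists" means the preloaded images tarball was already downloaded, so per-image caching is skipped. A sketch of confirming that on this host (the cache layout is an assumption based on minikube's standard MINIKUBE_HOME structure):
  ls "$MINIKUBE_HOME/cache/preloaded-tarball/"   # preloaded-images-k8s-*.tar.lz4 if present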

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestAddons/parallel/Registry (18.13s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 11.117754ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k4jbl" [375e86fc-3adb-44c7-a42f-e5c468af6b09] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014434578s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-292h6" [8a2d0753-2f1a-4320-a56e-9680032cef3d] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010239148s
addons_test.go:316: (dbg) Run:  kubectl --context addons-702000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-702000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-702000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.978551268s)
addons_test.go:331: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.13s)
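
Note: the in-cluster wget probe above succeeded; what gets skipped is the host-side half, since the Docker driver on macOS cannot assume direct routes into the cluster network. A hedged host-side workaround via port forwarding (service name and ports follow the registry addon's usual layout; treat them as assumptions):
  kubectl --context addons-702000 -n kube-system port-forward service/registry 5000:80 &
  curl -sS http://localhost:5000/v2/_catalog   # registry HTTP API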

TestAddons/parallel/Ingress (12.35s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-702000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-702000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-702000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5a380c05-3eac-40fe-a070-a24ae376dc2e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5a380c05-3eac-40fe-a070-a24ae376dc2e] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.010731723s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-702000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:258: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.35s)
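
Note: the skip is about DNS, not the addon: the curl-over-ssh check above worked, but resolving nginx.example.com from the host would need port forwarding on this driver. A hedged equivalent by hand (the controller service name is an assumption from the stock ingress-nginx layout):
  kubectl --context addons-702000 -n ingress-nginx port-forward service/ingress-nginx-controller 8080:80 &
  curl -s http://127.0.0.1:8080/ -H 'Host: nginx.example.com'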

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
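
Note: this suite runs the docker runtime on the Docker driver, hence the skip. A hypothetical profile that would qualify (the profile name is illustrative):
  out/minikube-darwin-amd64 start -p containerd-test --driver=docker --container-runtime=containerd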

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (14.15s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-625000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-625000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-pzdv2" [9add6e4a-dec3-4331-be22-0fa3a6e9277e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-pzdv2" [9add6e4a-dec3-4331-be22-0fa3a6e9277e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 14.008639074s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (14.15s)
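
Note: the deployment itself came up healthy; only the NodePort connectivity check is skipped on port-forwarded drivers (issue 7383 above). A hedged manual alternative that lets minikube hold its own tunnel open:
  out/minikube-darwin-amd64 service hello-node-connect -p functional-625000 --url   # keeps a tunnel process running on the Docker driver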

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
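
Note: all three DNS tunnel tests skip because tunnel DNS forwarding is Hyperkit-only on Darwin. For reference, the dscacheutil variant resolves a cluster name through macOS's resolver roughly like this (the service name is illustrative, and it only works where forwarding is supported):
  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local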

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.87s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-859000 [pass: true] --------------------------------
Every probe below failed in the same way, because the cilium-859000 profile and its kubeconfig context were never created (the test skips before starting a cluster). Grouped by error message:

>>> failing with "Error in configuration: context was not found for specified context: cilium-859000":
netcat: nslookup kubernetes.default
netcat: nslookup debug kubernetes.default a-records
netcat: dig search kubernetes.default
netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53
netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53
netcat: nc 10.96.0.10 udp/53
netcat: nc 10.96.0.10 tcp/53
netcat: /etc/nsswitch.conf
netcat: /etc/hosts
netcat: /etc/resolv.conf
k8s: nodes, services, endpoints, daemon sets, deployments and pods
k8s: describe cilium daemon set
k8s: describe cilium daemon set pod(s)
k8s: describe cilium deployment
k8s: describe cilium deployment pod(s)
k8s: cms

>>> failing with: error: context "cilium-859000" does not exist:
k8s: describe netcat deployment
k8s: describe netcat pod(s)
k8s: netcat logs
k8s: describe coredns deployment
k8s: describe coredns pods
k8s: coredns logs
k8s: describe api server pod(s)
k8s: api server logs
k8s: cilium daemon set container(s) logs (current)
k8s: cilium daemon set container(s) logs (previous)
k8s: cilium deployment container(s) logs (current)
k8s: cilium deployment container(s) logs (previous)
k8s: describe kube-proxy daemon set
k8s: describe kube-proxy pod(s)
k8s: kube-proxy logs

>>> failing with: * Profile "cilium-859000" not found. Run "minikube profile list" to view all profiles. / To start a cluster, run: "minikube start -p cilium-859000":
host: /etc/nsswitch.conf
host: /etc/hosts
host: /etc/resolv.conf
host: crictl pods
host: crictl containers
host: /etc/cni
host: ip a s
host: ip r s
host: iptables-save
host: iptables table nat
host: kubelet daemon status
host: kubelet daemon config
k8s: kubelet logs
host: /etc/kubernetes/kubelet.conf
host: /var/lib/kubelet/config.yaml
host: docker daemon status
host: docker daemon config
host: /etc/docker/daemon.json
host: docker system info
host: cri-docker daemon status
host: cri-docker daemon config
host: /etc/systemd/system/cri-docker.service.d/10-cni.conf
host: /usr/lib/systemd/system/cri-docker.service
host: cri-dockerd version
host: containerd daemon status
host: containerd daemon config
host: /lib/systemd/system/containerd.service
host: /etc/containerd/config.toml
host: containerd config dump
host: crio daemon status
host: crio daemon config
host: /etc/crio
host: crio config

>>> k8s: kubectl config (empty -- no clusters, contexts, or users):
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

----------------------- debugLogs end: cilium-859000 [took: 5.484108686s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-859000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-859000
--- SKIP: TestNetworkPlugins/group/cilium (5.87s)
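
Note: the cilium group is skipped outright, and the debug probes above fail only because no cilium-859000 profile was ever created. A hypothetical manual repro, using minikube's built-in CNI selector:
  out/minikube-darwin-amd64 start -p cilium-859000 --driver=docker --cni=cilium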

TestStartStop/group/disable-driver-mounts (0.37s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-782000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-782000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.37s)
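
Note: a hedged sketch of the combination this test wants, runnable only where the virtualbox driver is available:
  out/minikube-darwin-amd64 start -p disable-driver-mounts-782000 --driver=virtualbox --disable-driver-mounts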