Test Report: Docker_macOS 15565

1d3af1f8d84ef2968d5cd5a44b845e58482fc59d:2023-01-28:27630

Failed tests (17/306)

TestErrorSpam/setup (28.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-254000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-254000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 --driver=docker : (28.521972621s)
error_spam_test.go:96: unexpected stderr: "! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565"
error_spam_test.go:110: minikube stdout:
* [nospam-254000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting control plane node nospam-254000 in cluster nospam-254000
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=2250MB) ...
* Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-254000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
--- FAIL: TestErrorSpam/setup (28.52s)
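Note on the failure above: the start command itself succeeded, but its stderr contained the version-mismatch warning, and TestErrorSpam treats any stderr line outside its allow list as spam. A minimal Go sketch of that kind of check follows; the helper name isSpam and the allow list are illustrative assumptions, not minikube's actual test code.

	package main

	import (
		"fmt"
		"strings"
	)

	// isSpam reports whether a stderr line should fail the check.
	// The allow list here is hypothetical; the real test maintains
	// its own set of permitted patterns.
	func isSpam(line string, allowed []string) bool {
		line = strings.TrimSpace(line)
		if line == "" {
			return false
		}
		for _, ok := range allowed {
			if strings.Contains(line, ok) {
				return false
			}
		}
		return true
	}

	func main() {
		stderr := "! Image was not built for the current minikube version."
		if isSpam(stderr, []string{"Enabled addons"}) { // allow list is illustrative
			fmt.Printf("unexpected stderr: %q\n", stderr)
		}
	}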

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Non-zero exit: out/minikube-darwin-amd64 license: exit status 40 (269.653426ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: download request did not return a 200, received: 404
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_config_8726cae15f99b94c9f6c9c6f69cb2fb49584395b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2216: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.27s)
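Note on the failure above: `minikube license` exits with code 40 (INET_LICENSES) because the download endpoint answered 404 instead of 200. A minimal sketch of the failing check, assuming a placeholder URL (the real bundle URL is not shown in this log):

	package main

	import (
		"fmt"
		"net/http"
		"os"
	)

	// download fetches url and treats any non-200 status as an error,
	// matching the "did not return a 200" wording in the log above.
	func download(url string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("download request did not return a 200, received: %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		// Placeholder URL; minikube's actual license bundle URL is not in this log.
		if err := download("https://example.com/licenses.zip"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to INET_LICENSES:", err)
			os.Exit(40) // exit status 40, as seen in the test output
		}
	}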

TestIngressAddonLegacy/StartLegacyK8sCluster (256.94s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-901000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0128 10:49:47.370234   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:52:03.492497   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:52:07.337931   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.343935   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.356163   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.378353   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.418931   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.499409   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.659662   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:07.981841   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:08.623071   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:09.904315   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:12.466604   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:17.589045   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:27.831089   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:52:31.262830   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:52:48.313666   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 10:53:29.274832   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
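The E-lines above are background noise rather than this test's failure: client-go's cert rotation keeps trying to reload client certificates for profiles from earlier tests (addons-582000, functional-251000) whose client.crt files no longer exist on disk. A minimal sketch of how such a reload loop produces exactly this error once the file is gone; the path and function name are illustrative, not the actual client-go code:

	package main

	import (
		"fmt"
		"os"
	)

	// reloadCert re-reads a client certificate from disk; once the
	// profile directory has been deleted, os.ReadFile returns the
	// "no such file or directory" error seen in the log.
	func reloadCert(certPath string) error {
		if _, err := os.ReadFile(certPath); err != nil {
			return fmt.Errorf("key failed with : %w", err)
		}
		return nil
	}

	func main() {
		// Illustrative path mirroring the log; the profile was already deleted.
		err := reloadCert("/Users/jenkins/.minikube/profiles/functional-251000/client.crt")
		if err != nil {
			fmt.Println(err)
		}
	}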
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-901000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m16.905309993s)

-- stdout --
	* [ingress-addon-legacy-901000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-901000 in cluster ingress-addon-legacy-901000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0128 10:49:23.670153   29018 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:49:23.670317   29018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:49:23.670322   29018 out.go:309] Setting ErrFile to fd 2...
	I0128 10:49:23.670326   29018 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:49:23.670442   29018 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 10:49:23.670964   29018 out.go:303] Setting JSON to false
	I0128 10:49:23.689153   29018 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6538,"bootTime":1674925225,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 10:49:23.689242   29018 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:49:23.711566   29018 out.go:177] * [ingress-addon-legacy-901000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 10:49:23.755289   29018 notify.go:220] Checking for updates...
	I0128 10:49:23.777079   29018 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:49:23.798167   29018 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 10:49:23.819420   29018 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:49:23.841249   29018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:49:23.863422   29018 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 10:49:23.885392   29018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:49:23.907370   29018 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:49:23.967975   29018 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:49:23.968108   29018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:49:24.108742   29018 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:49:24.017318884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:49:24.130739   29018 out.go:177] * Using the docker driver based on user configuration
	I0128 10:49:24.173423   29018 start.go:296] selected driver: docker
	I0128 10:49:24.173448   29018 start.go:857] validating driver "docker" against <nil>
	I0128 10:49:24.173470   29018 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:49:24.177743   29018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:49:24.318656   29018 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:49:24.227605833 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:49:24.318818   29018 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 10:49:24.319029   29018 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 10:49:24.340927   29018 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 10:49:24.362648   29018 cni.go:84] Creating CNI manager for ""
	I0128 10:49:24.362689   29018 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:49:24.362707   29018 start_flags.go:319] config:
	{Name:ingress-addon-legacy-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:49:24.405618   29018 out.go:177] * Starting control plane node ingress-addon-legacy-901000 in cluster ingress-addon-legacy-901000
	I0128 10:49:24.426703   29018 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:49:24.448468   29018 out.go:177] * Pulling base image ...
	I0128 10:49:24.490802   29018 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:49:24.490873   29018 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:49:24.545277   29018 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0128 10:49:24.545300   29018 cache.go:57] Caching tarball of preloaded images
	I0128 10:49:24.545497   29018 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:49:24.547083   29018 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 10:49:24.566506   29018 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0128 10:49:24.566535   29018 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 10:49:24.608799   29018 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:49:24.684573   29018 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0128 10:49:29.168083   29018 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:49:29.168287   29018 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:49:29.792698   29018 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0128 10:49:29.792947   29018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/config.json ...
	I0128 10:49:29.792973   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/config.json: {Name:mkee0c10f8c4c62600db4e7b1798993c7f671dc0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:29.793231   29018 cache.go:193] Successfully downloaded all kic artifacts
	I0128 10:49:29.793258   29018 start.go:364] acquiring machines lock for ingress-addon-legacy-901000: {Name:mk94f450133305776b843d6609cefcba6c7216ea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 10:49:29.793348   29018 start.go:368] acquired machines lock for "ingress-addon-legacy-901000" in 79.673µs
	I0128 10:49:29.793370   29018 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 10:49:29.793415   29018 start.go:125] createHost starting for "" (driver="docker")
	I0128 10:49:29.815773   29018 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0128 10:49:29.815966   29018 start.go:159] libmachine.API.Create for "ingress-addon-legacy-901000" (driver="docker")
	I0128 10:49:29.815994   29018 client.go:168] LocalClient.Create starting
	I0128 10:49:29.816108   29018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem
	I0128 10:49:29.816152   29018 main.go:141] libmachine: Decoding PEM data...
	I0128 10:49:29.816166   29018 main.go:141] libmachine: Parsing certificate...
	I0128 10:49:29.816242   29018 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem
	I0128 10:49:29.816282   29018 main.go:141] libmachine: Decoding PEM data...
	I0128 10:49:29.816291   29018 main.go:141] libmachine: Parsing certificate...
	I0128 10:49:29.837321   29018 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-901000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 10:49:29.894513   29018 cli_runner.go:211] docker network inspect ingress-addon-legacy-901000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 10:49:29.894628   29018 network_create.go:281] running [docker network inspect ingress-addon-legacy-901000] to gather additional debugging logs...
	I0128 10:49:29.894647   29018 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-901000
	W0128 10:49:29.948020   29018 cli_runner.go:211] docker network inspect ingress-addon-legacy-901000 returned with exit code 1
	I0128 10:49:29.948049   29018 network_create.go:284] error running [docker network inspect ingress-addon-legacy-901000]: docker network inspect ingress-addon-legacy-901000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-901000
	I0128 10:49:29.948061   29018 network_create.go:286] output of [docker network inspect ingress-addon-legacy-901000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-901000
	
	** /stderr **
	I0128 10:49:29.948156   29018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 10:49:30.001567   29018 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d97200}
	I0128 10:49:30.001599   29018 network_create.go:123] attempt to create docker network ingress-addon-legacy-901000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0128 10:49:30.001677   29018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-901000 ingress-addon-legacy-901000
	I0128 10:49:30.088414   29018 network_create.go:107] docker network ingress-addon-legacy-901000 192.168.49.0/24 created
	I0128 10:49:30.088456   29018 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-901000" container
	I0128 10:49:30.088569   29018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 10:49:30.142279   29018 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-901000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-901000 --label created_by.minikube.sigs.k8s.io=true
	I0128 10:49:30.195829   29018 oci.go:103] Successfully created a docker volume ingress-addon-legacy-901000
	I0128 10:49:30.195955   29018 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-901000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-901000 --entrypoint /usr/bin/test -v ingress-addon-legacy-901000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 10:49:30.633711   29018 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-901000
	I0128 10:49:30.633744   29018 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:49:30.633760   29018 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 10:49:30.633871   29018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-901000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 10:49:36.971109   29018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-901000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (6.337131457s)
	I0128 10:49:36.971134   29018 kic.go:199] duration metric: took 6.337338 seconds to extract preloaded images to volume
	I0128 10:49:36.971257   29018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 10:49:37.112343   29018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-901000 --name ingress-addon-legacy-901000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-901000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-901000 --network ingress-addon-legacy-901000 --ip 192.168.49.2 --volume ingress-addon-legacy-901000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 10:49:37.470799   29018 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Running}}
	I0128 10:49:37.531333   29018 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Status}}
	I0128 10:49:37.593780   29018 cli_runner.go:164] Run: docker exec ingress-addon-legacy-901000 stat /var/lib/dpkg/alternatives/iptables
	I0128 10:49:37.705522   29018 oci.go:144] the created container "ingress-addon-legacy-901000" has a running status.
	I0128 10:49:37.705561   29018 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa...
	I0128 10:49:37.870765   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0128 10:49:37.870857   29018 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 10:49:37.975284   29018 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Status}}
	I0128 10:49:38.031784   29018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 10:49:38.031805   29018 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-901000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 10:49:38.133345   29018 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Status}}
	I0128 10:49:38.189431   29018 machine.go:88] provisioning docker machine ...
	I0128 10:49:38.189469   29018 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-901000"
	I0128 10:49:38.189577   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:38.246734   29018 main.go:141] libmachine: Using SSH client type: native
	I0128 10:49:38.246946   29018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 58333 <nil> <nil>}
	I0128 10:49:38.246969   29018 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-901000 && echo "ingress-addon-legacy-901000" | sudo tee /etc/hostname
	I0128 10:49:38.389936   29018 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-901000
	
	I0128 10:49:38.390031   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:38.448378   29018 main.go:141] libmachine: Using SSH client type: native
	I0128 10:49:38.448532   29018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 58333 <nil> <nil>}
	I0128 10:49:38.448548   29018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-901000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-901000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-901000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 10:49:38.580278   29018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 10:49:38.580301   29018 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 10:49:38.580322   29018 ubuntu.go:177] setting up certificates
	I0128 10:49:38.580331   29018 provision.go:83] configureAuth start
	I0128 10:49:38.580413   29018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-901000
	I0128 10:49:38.636096   29018 provision.go:138] copyHostCerts
	I0128 10:49:38.636141   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 10:49:38.636198   29018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 10:49:38.636204   29018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 10:49:38.636325   29018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 10:49:38.636487   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 10:49:38.636518   29018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 10:49:38.636523   29018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 10:49:38.636590   29018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 10:49:38.636696   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 10:49:38.636736   29018 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 10:49:38.636740   29018 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 10:49:38.636803   29018 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 10:49:38.636918   29018 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-901000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-901000]
	I0128 10:49:38.712260   29018 provision.go:172] copyRemoteCerts
	I0128 10:49:38.712357   29018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 10:49:38.712454   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:38.770096   29018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:49:38.863705   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0128 10:49:38.863803   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 10:49:38.881373   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0128 10:49:38.881435   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0128 10:49:38.898839   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0128 10:49:38.898926   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 10:49:38.916363   29018 provision.go:86] duration metric: configureAuth took 336.017732ms
	I0128 10:49:38.916376   29018 ubuntu.go:193] setting minikube options for container-runtime
	I0128 10:49:38.916532   29018 config.go:180] Loaded profile config "ingress-addon-legacy-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:49:38.916589   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:38.973431   29018 main.go:141] libmachine: Using SSH client type: native
	I0128 10:49:38.973626   29018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 58333 <nil> <nil>}
	I0128 10:49:38.973643   29018 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 10:49:39.106848   29018 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 10:49:39.106865   29018 ubuntu.go:71] root file system type: overlay
	I0128 10:49:39.107034   29018 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 10:49:39.107137   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:39.164524   29018 main.go:141] libmachine: Using SSH client type: native
	I0128 10:49:39.164673   29018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 58333 <nil> <nil>}
	I0128 10:49:39.164722   29018 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 10:49:39.308010   29018 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 10:49:39.308097   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:39.364810   29018 main.go:141] libmachine: Using SSH client type: native
	I0128 10:49:39.364960   29018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 58333 <nil> <nil>}
	I0128 10:49:39.364973   29018 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 10:49:40.022310   29018 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 18:49:39.306038921 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 10:49:40.022333   29018 machine.go:91] provisioned docker machine in 1.832872688s
	I0128 10:49:40.022339   29018 client.go:171] LocalClient.Create took 10.206281799s
	I0128 10:49:40.022355   29018 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-901000" took 10.206331594s
	I0128 10:49:40.022364   29018 start.go:300] post-start starting for "ingress-addon-legacy-901000" (driver="docker")
	I0128 10:49:40.022372   29018 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 10:49:40.022438   29018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 10:49:40.022499   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:40.083386   29018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:49:40.181091   29018 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 10:49:40.184756   29018 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 10:49:40.184770   29018 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 10:49:40.184777   29018 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 10:49:40.184782   29018 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 10:49:40.184793   29018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 10:49:40.184896   29018 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 10:49:40.185066   29018 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 10:49:40.185073   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> /etc/ssl/certs/259822.pem
	I0128 10:49:40.185272   29018 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 10:49:40.193203   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 10:49:40.212971   29018 start.go:303] post-start completed in 190.593637ms
	I0128 10:49:40.213591   29018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-901000
	I0128 10:49:40.270871   29018 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/config.json ...
	I0128 10:49:40.271281   29018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 10:49:40.271337   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:40.330661   29018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:49:40.423505   29018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 10:49:40.428016   29018 start.go:128] duration metric: createHost completed in 10.634527986s
	I0128 10:49:40.428030   29018 start.go:83] releasing machines lock for "ingress-addon-legacy-901000", held for 10.634613374s
	I0128 10:49:40.428108   29018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-901000
	I0128 10:49:40.486751   29018 ssh_runner.go:195] Run: cat /version.json
	I0128 10:49:40.486797   29018 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 10:49:40.486838   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:40.486872   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:40.549144   29018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:49:40.549584   29018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	W0128 10:49:40.835074   29018 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 10:49:40.835147   29018 ssh_runner.go:195] Run: systemctl --version
	I0128 10:49:40.840167   29018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 10:49:40.845233   29018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 10:49:40.866101   29018 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 10:49:40.866181   29018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 10:49:40.880368   29018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 10:49:40.888163   29018 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
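
The find/sed pipelines above patch existing CNI config files in place rather than writing new ones: loopback configs gain a "name" field and are pinned to cniVersion 1.0.0, while bridge and podman configs have their subnets rewritten to the 10.244.0.0/16 pod CIDR (IPv6 entries are dropped). Assuming a stock loopback file, the patched result would look roughly like:

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }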
	I0128 10:49:40.888179   29018 start.go:483] detecting cgroup driver to use...
	I0128 10:49:40.888195   29018 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:49:40.888310   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:49:40.902736   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0128 10:49:40.911860   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 10:49:40.920530   29018 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 10:49:40.920588   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 10:49:40.929536   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:49:40.938361   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 10:49:40.948060   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 10:49:40.957374   29018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 10:49:40.966440   29018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 10:49:40.974984   29018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 10:49:40.982716   29018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 10:49:40.990119   29018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:49:41.062430   29018 ssh_runner.go:195] Run: sudo systemctl restart containerd
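
Taken together, the sed edits above shape /etc/containerd/config.toml before the restart: the pause image, OOM-score and cgroup settings are pinned, and the legacy runc v1 / runtime.v1.linux runtimes are mapped to io.containerd.runc.v2. A rough, abbreviated sketch of the resulting CRI section (exact table paths vary by containerd version):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "k8s.gcr.io/pause:3.2"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          # matches the "cgroupfs" driver detected on the host above
          SystemdCgroup = false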
	I0128 10:49:41.136806   29018 start.go:483] detecting cgroup driver to use...
	I0128 10:49:41.136855   29018 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 10:49:41.136922   29018 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 10:49:41.148072   29018 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 10:49:41.148152   29018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 10:49:41.159988   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 10:49:41.175394   29018 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 10:49:41.252206   29018 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 10:49:41.344380   29018 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 10:49:41.344396   29018 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
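
The 144-byte payload itself is not echoed to the log. Given the "cgroupfs" message above, a daemon.json of roughly this shape would be written (only the exec-opts key is implied by the log; the remaining bytes are not shown):

    {
        "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }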
	I0128 10:49:41.383741   29018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 10:49:41.455707   29018 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 10:49:41.674234   29018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 10:49:41.705434   29018 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 10:49:41.780166   29018 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.23 ...
	I0128 10:49:41.780368   29018 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-901000 dig +short host.docker.internal
	I0128 10:49:41.894654   29018 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 10:49:41.894762   29018 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 10:49:41.899204   29018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
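
The one-liner above is an idempotent way to pin a hosts entry: filter out any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. The same pattern as a reusable shell sketch (ensure_hosts_entry is a hypothetical helper, not part of minikube):

    ensure_hosts_entry() {
        # hypothetical helper mirroring the logged command
        ip="$1"; name="$2"; tmp="/tmp/h.$$"
        { grep -v "$(printf '\t')${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
        sudo cp "$tmp" /etc/hosts
    }
    ensure_hosts_entry 192.168.65.2 host.minikube.internal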
	I0128 10:49:41.909766   29018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:49:41.968796   29018 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0128 10:49:41.968877   29018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:49:41.993531   29018 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0128 10:49:41.993548   29018 docker.go:560] Images already preloaded, skipping extraction
	I0128 10:49:41.993626   29018 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 10:49:42.018786   29018 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0128 10:49:42.018806   29018 cache_images.go:84] Images are preloaded, skipping loading
	I0128 10:49:42.018899   29018 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 10:49:42.091204   29018 cni.go:84] Creating CNI manager for ""
	I0128 10:49:42.091220   29018 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:49:42.091234   29018 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 10:49:42.091249   29018 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-901000 NodeName:ingress-addon-legacy-901000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 10:49:42.091391   29018 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-901000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 10:49:42.091478   29018 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-901000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 10:49:42.091553   29018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0128 10:49:42.099754   29018 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 10:49:42.099857   29018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 10:49:42.107973   29018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0128 10:49:42.121489   29018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0128 10:49:42.134738   29018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0128 10:49:42.148167   29018 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0128 10:49:42.152903   29018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 10:49:42.163179   29018 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000 for IP: 192.168.49.2
	I0128 10:49:42.163203   29018 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.163412   29018 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 10:49:42.163492   29018 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 10:49:42.163539   29018 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.key
	I0128 10:49:42.163551   29018 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.crt with IP's: []
	I0128 10:49:42.368896   29018 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.crt ...
	I0128 10:49:42.368914   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.crt: {Name:mkeb2db4ec92000ef6a94e735861b4539ec2b578 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.369236   29018 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.key ...
	I0128 10:49:42.369244   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/client.key: {Name:mkeb20beab419c6e3e88e5e52b7b0c7186c1104f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.369471   29018 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key.dd3b5fb2
	I0128 10:49:42.369487   29018 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 10:49:42.628910   29018 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt.dd3b5fb2 ...
	I0128 10:49:42.628924   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt.dd3b5fb2: {Name:mk09f6508b0cab811b596b3eec6c63dc831f3e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.629180   29018 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key.dd3b5fb2 ...
	I0128 10:49:42.629188   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key.dd3b5fb2: {Name:mk57ee3382505064de5f3c8e2a952876dcdf9401 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.629364   29018 certs.go:333] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt
	I0128 10:49:42.629517   29018 certs.go:337] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key
	I0128 10:49:42.629663   29018 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.key
	I0128 10:49:42.629677   29018 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.crt with IP's: []
	I0128 10:49:42.699977   29018 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.crt ...
	I0128 10:49:42.699994   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.crt: {Name:mk83be7606e968ad4216d423a3cf6fc3bb28018e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.700251   29018 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.key ...
	I0128 10:49:42.700260   29018 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.key: {Name:mkf74003115bd131574f3ca0b427cf01c9611766 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:49:42.700457   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0128 10:49:42.700491   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0128 10:49:42.700513   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0128 10:49:42.700537   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0128 10:49:42.700559   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0128 10:49:42.700589   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0128 10:49:42.700611   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0128 10:49:42.700654   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0128 10:49:42.700756   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 10:49:42.700808   29018 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 10:49:42.700824   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 10:49:42.700858   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 10:49:42.700891   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 10:49:42.700930   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 10:49:42.701010   29018 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 10:49:42.701042   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> /usr/share/ca-certificates/259822.pem
	I0128 10:49:42.701063   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:49:42.701083   29018 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem -> /usr/share/ca-certificates/25982.pem
	I0128 10:49:42.701589   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 10:49:42.720278   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 10:49:42.737649   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 10:49:42.755020   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/ingress-addon-legacy-901000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0128 10:49:42.772340   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 10:49:42.790130   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 10:49:42.807914   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 10:49:42.825554   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 10:49:42.842759   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 10:49:42.860263   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 10:49:42.877535   29018 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 10:49:42.894720   29018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 10:49:42.907584   29018 ssh_runner.go:195] Run: openssl version
	I0128 10:49:42.913299   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 10:49:42.921376   29018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 10:49:42.925311   29018 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 10:49:42.925363   29018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 10:49:42.930907   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 10:49:42.939094   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 10:49:42.947240   29018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:49:42.951455   29018 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:49:42.951517   29018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 10:49:42.957252   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 10:49:42.965309   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 10:49:42.973600   29018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 10:49:42.977623   29018 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 10:49:42.977672   29018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 10:49:42.983156   29018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
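
The sequence above implements OpenSSL's hashed-directory CA lookup by hand: certificates in /etc/ssl/certs are found via symlinks named after the subject hash (for example b5213941.0 for the minikube CA, as logged). Condensed into a sketch:

    # compute the subject hash OpenSSL uses for lookups in /etc/ssl/certs
    pem=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$pem")      # b5213941 in this run
    sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"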
	I0128 10:49:42.991176   29018 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-901000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-901000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:49:42.991271   29018 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 10:49:43.013913   29018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 10:49:43.021790   29018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 10:49:43.029215   29018 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 10:49:43.029286   29018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 10:49:43.036979   29018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 10:49:43.037008   29018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 10:49:43.085385   29018 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0128 10:49:43.085451   29018 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 10:49:43.385684   29018 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 10:49:43.385784   29018 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 10:49:43.385881   29018 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 10:49:43.609103   29018 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 10:49:43.609658   29018 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 10:49:43.609713   29018 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 10:49:43.681935   29018 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 10:49:43.715898   29018 out.go:204]   - Generating certificates and keys ...
	I0128 10:49:43.715989   29018 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 10:49:43.716073   29018 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 10:49:44.065791   29018 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 10:49:44.171757   29018 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 10:49:44.390222   29018 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 10:49:44.586270   29018 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 10:49:44.630782   29018 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 10:49:44.630907   29018 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-901000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0128 10:49:44.744979   29018 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 10:49:44.745085   29018 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-901000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0128 10:49:44.994487   29018 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 10:49:45.327034   29018 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 10:49:45.550259   29018 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 10:49:45.550389   29018 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 10:49:45.839478   29018 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 10:49:46.003290   29018 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 10:49:46.112082   29018 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 10:49:46.149588   29018 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 10:49:46.150116   29018 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 10:49:46.171760   29018 out.go:204]   - Booting up control plane ...
	I0128 10:49:46.171874   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 10:49:46.172007   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 10:49:46.172104   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 10:49:46.172210   29018 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 10:49:46.172421   29018 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 10:50:26.159226   29018 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 10:50:26.159607   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:50:26.159777   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:50:31.162079   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:50:31.162322   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:50:41.162361   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:50:41.162570   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:51:01.195135   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:51:01.195347   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:51:41.213175   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:51:41.213337   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:51:41.213348   29018 kubeadm.go:322] 
	I0128 10:51:41.213387   29018 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0128 10:51:41.213423   29018 kubeadm.go:322] 		timed out waiting for the condition
	I0128 10:51:41.213428   29018 kubeadm.go:322] 
	I0128 10:51:41.213460   29018 kubeadm.go:322] 	This error is likely caused by:
	I0128 10:51:41.213489   29018 kubeadm.go:322] 		- The kubelet is not running
	I0128 10:51:41.213577   29018 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 10:51:41.213582   29018 kubeadm.go:322] 
	I0128 10:51:41.213706   29018 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 10:51:41.213739   29018 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0128 10:51:41.213770   29018 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0128 10:51:41.213785   29018 kubeadm.go:322] 
	I0128 10:51:41.213892   29018 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 10:51:41.213967   29018 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0128 10:51:41.213977   29018 kubeadm.go:322] 
	I0128 10:51:41.214036   29018 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0128 10:51:41.214070   29018 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0128 10:51:41.214139   29018 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0128 10:51:41.214184   29018 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0128 10:51:41.214195   29018 kubeadm.go:322] 
	I0128 10:51:41.217029   29018 kubeadm.go:322] W0128 18:49:43.084618    1160 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0128 10:51:41.217177   29018 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 10:51:41.217246   29018 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 10:51:41.217435   29018 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0128 10:51:41.217585   29018 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 10:51:41.217709   29018 kubeadm.go:322] W0128 18:49:46.155206    1160 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:51:41.217816   29018 kubeadm.go:322] W0128 18:49:46.155951    1160 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:51:41.217879   29018 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 10:51:41.217939   29018 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0128 10:51:41.218141   29018 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-901000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-901000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:49:43.084618    1160 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:49:46.155206    1160 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:49:46.155951    1160 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 10:51:41.218178   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 10:51:41.634091   29018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 10:51:41.643829   29018 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 10:51:41.643887   29018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 10:51:41.651313   29018 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 10:51:41.651335   29018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 10:51:41.699726   29018 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0128 10:51:41.699761   29018 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 10:51:41.996371   29018 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 10:51:41.996450   29018 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 10:51:41.996525   29018 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 10:51:42.220100   29018 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 10:51:42.220186   29018 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 10:51:42.220226   29018 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 10:51:42.298668   29018 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 10:51:42.320110   29018 out.go:204]   - Generating certificates and keys ...
	I0128 10:51:42.320190   29018 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 10:51:42.320289   29018 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 10:51:42.320376   29018 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 10:51:42.320438   29018 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 10:51:42.320502   29018 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 10:51:42.320568   29018 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 10:51:42.320647   29018 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 10:51:42.320706   29018 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 10:51:42.320768   29018 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 10:51:42.320856   29018 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 10:51:42.320893   29018 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 10:51:42.320940   29018 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 10:51:42.380484   29018 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 10:51:42.819662   29018 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 10:51:42.883191   29018 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 10:51:42.986895   29018 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 10:51:42.987457   29018 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 10:51:43.009080   29018 out.go:204]   - Booting up control plane ...
	I0128 10:51:43.009244   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 10:51:43.009387   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 10:51:43.009520   29018 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 10:51:43.009659   29018 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 10:51:43.009916   29018 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 10:52:22.998172   29018 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 10:52:22.998789   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:52:22.998991   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:52:28.001093   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:52:28.001327   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:52:38.003316   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:52:38.003563   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:52:58.005166   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:52:58.005386   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:53:38.007516   29018 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 10:53:38.007766   29018 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 10:53:38.007779   29018 kubeadm.go:322] 
	I0128 10:53:38.007825   29018 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0128 10:53:38.007891   29018 kubeadm.go:322] 		timed out waiting for the condition
	I0128 10:53:38.007907   29018 kubeadm.go:322] 
	I0128 10:53:38.007947   29018 kubeadm.go:322] 	This error is likely caused by:
	I0128 10:53:38.007996   29018 kubeadm.go:322] 		- The kubelet is not running
	I0128 10:53:38.008125   29018 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 10:53:38.008145   29018 kubeadm.go:322] 
	I0128 10:53:38.008277   29018 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 10:53:38.008317   29018 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0128 10:53:38.008347   29018 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0128 10:53:38.008357   29018 kubeadm.go:322] 
	I0128 10:53:38.008471   29018 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 10:53:38.008559   29018 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0128 10:53:38.008571   29018 kubeadm.go:322] 
	I0128 10:53:38.008691   29018 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0128 10:53:38.008763   29018 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0128 10:53:38.008858   29018 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0128 10:53:38.008905   29018 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0128 10:53:38.008912   29018 kubeadm.go:322] 
	I0128 10:53:38.011563   29018 kubeadm.go:322] W0128 18:51:41.695747    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0128 10:53:38.011758   29018 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 10:53:38.011841   29018 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 10:53:38.011951   29018 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
	I0128 10:53:38.012037   29018 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 10:53:38.012137   29018 kubeadm.go:322] W0128 18:51:42.988539    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:53:38.012232   29018 kubeadm.go:322] W0128 18:51:42.989220    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0128 10:53:38.012297   29018 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 10:53:38.012363   29018 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 10:53:38.012389   29018 kubeadm.go:403] StartCluster complete in 3m54.96980984s
	I0128 10:53:38.012480   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 10:53:38.035527   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.035540   29018 logs.go:281] No container was found matching "kube-apiserver"
	I0128 10:53:38.035615   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 10:53:38.059739   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.059752   29018 logs.go:281] No container was found matching "etcd"
	I0128 10:53:38.059820   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 10:53:38.083644   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.083660   29018 logs.go:281] No container was found matching "coredns"
	I0128 10:53:38.083733   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 10:53:38.106626   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.106642   29018 logs.go:281] No container was found matching "kube-scheduler"
	I0128 10:53:38.106711   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 10:53:38.129768   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.129781   29018 logs.go:281] No container was found matching "kube-proxy"
	I0128 10:53:38.129850   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 10:53:38.153077   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.153090   29018 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 10:53:38.153162   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 10:53:38.176772   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.176785   29018 logs.go:281] No container was found matching "storage-provisioner"
	I0128 10:53:38.176862   29018 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 10:53:38.200276   29018 logs.go:279] 0 containers: []
	W0128 10:53:38.200290   29018 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 10:53:38.200298   29018 logs.go:124] Gathering logs for kubelet ...
	I0128 10:53:38.200307   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 10:53:38.239117   29018 logs.go:124] Gathering logs for dmesg ...
	I0128 10:53:38.239135   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 10:53:38.253406   29018 logs.go:124] Gathering logs for describe nodes ...
	I0128 10:53:38.253421   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 10:53:38.312228   29018 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 10:53:38.312240   29018 logs.go:124] Gathering logs for Docker ...
	I0128 10:53:38.312247   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 10:53:38.329315   29018 logs.go:124] Gathering logs for container status ...
	I0128 10:53:38.329329   29018 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 10:53:40.379078   29018 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049704712s)
	W0128 10:53:40.379204   29018 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:51:41.695747    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:51:42.988539    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:51:42.989220    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 10:53:40.379227   29018 out.go:239] * 
	W0128 10:53:40.379361   29018 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:51:41.695747    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:51:42.988539    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:51:42.989220    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 10:53:40.379377   29018 out.go:239] * 
	W0128 10:53:40.380045   29018 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:53:40.444664   29018 out.go:177] 
	W0128 10:53:40.486519   29018 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0128 18:51:41.695747    3667 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0128 18:51:42.988539    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0128 18:51:42.989220    3667 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 10:53:40.486608   29018 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 10:53:40.486645   29018 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 10:53:40.507594   29018 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-901000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (256.94s)
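The failure above reduces to one chain: the kubelet never answered on localhost:10248, so kubeadm's wait-control-plane phase timed out, the apiserver on localhost:8443 never came up, and every later kubectl call was refused. The log's own suggestion is to retry with the kubelet cgroup driver forced to systemd. A minimal manual-reproduction sketch along those lines (profile name and start flags are copied from the failing invocation at ingress_addon_legacy_test.go:41; the delete step and the ssh diagnostics are assumptions added for illustration, not part of the test):

	# Assumed cleanup step, not in the original test run
	out/minikube-darwin-amd64 delete -p ingress-addon-legacy-901000
	# Same args as the failing run, plus the cgroup-driver override the log suggests
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-901000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd
	# If the start still hangs in wait-control-plane, inspect the kubelet from
	# inside the node, per the kubeadm hint in the log:
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-901000 "sudo systemctl status kubelet"
	out/minikube-darwin-amd64 ssh -p ingress-addon-legacy-901000 "sudo journalctl -xeu kubelet"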

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 addons enable ingress --alsologtostderr -v=5
E0128 10:54:51.198403   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.149005815s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 10:53:40.662169   29398 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:53:40.662408   29398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:53:40.662414   29398 out.go:309] Setting ErrFile to fd 2...
	I0128 10:53:40.662418   29398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:53:40.662527   29398 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 10:53:40.684597   29398 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0128 10:53:40.707795   29398 config.go:180] Loaded profile config "ingress-addon-legacy-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:53:40.707826   29398 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-901000"
	I0128 10:53:40.707842   29398 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-901000"
	I0128 10:53:40.708358   29398 host.go:66] Checking if "ingress-addon-legacy-901000" exists ...
	I0128 10:53:40.709321   29398 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Status}}
	I0128 10:53:40.788430   29398 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0128 10:53:40.811301   29398 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0128 10:53:40.831988   29398 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0128 10:53:40.853126   29398 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0128 10:53:40.874229   29398 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0128 10:53:40.874250   29398 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0128 10:53:40.874335   29398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:53:40.930582   29398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:53:41.030533   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:41.082409   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:41.082430   29398 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:41.358857   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:41.412966   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:41.412981   29398 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:41.953810   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:42.008129   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:42.008149   29398 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:42.665495   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:42.719524   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:42.719540   29398 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:43.510896   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:43.564188   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:43.564203   29398 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:44.734768   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:44.791374   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:44.791390   29398 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:47.046870   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:47.102725   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:47.102740   29398 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:48.713642   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:48.767931   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:48.767954   29398 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:51.573330   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:51.628943   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:51.628958   29398 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:55.454225   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:53:55.507479   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:53:55.507495   29398 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:03.206299   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:54:03.262125   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:03.262140   29398 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:17.900198   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:54:17.955399   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:17.955416   29398 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:46.364851   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:54:46.418765   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:54:46.418781   29398 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:09.589781   29398 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0128 10:55:09.643991   29398 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:09.644027   29398 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-901000"
	I0128 10:55:09.665757   29398 out.go:177] * Verifying ingress addon...
	I0128 10:55:09.688477   29398 out.go:177] 
	W0128 10:55:09.709503   29398 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-901000" does not exist: client config: context "ingress-addon-legacy-901000" does not exist]
	W0128 10:55:09.709518   29398 out.go:239] * 
	W0128 10:55:09.713095   29398 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:55:09.734403   29398 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
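Triage note: every apply attempt above failed with "The connection to the server localhost:8443 was refused", i.e. the apiserver inside the node was not listening while the addon installer retried with backoff. A minimal triage sketch, assuming the ingress-addon-legacy-901000 container from this run is still up (profile name taken from the log; the node runtime is Docker per the start output):

    # Is kube-apiserver actually running inside the node?
    out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 ssh "docker ps --filter name=kube-apiserver"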
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-901000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba",
	        "Created": "2023-01-28T18:49:37.165157201Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:49:37.461845975Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hosts",
	        "LogPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba-json.log",
	        "Name": "/ingress-addon-legacy-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-901000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-901000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18ec4a350e09436ce1a1dd8dcc6f0558e16bec1fef56a541b97fc9669413ff44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58329"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58331"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58332"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/18ec4a350e09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5153d3ba400f",
	                        "ingress-addon-legacy-901000"
	                    ],
	                    "NetworkID": "75f5fa8912f9a79397f8005b10f53f0b6e94046f4747d4b6144f67e72664930f",
	                    "EndpointID": "ab06380879e1f23ecc40191323cc45f687a141c76e2fa5fcd320735101d15e31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000: exit status 6 (397.175876ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:55:10.203067   29494 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-901000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.61s)
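Triage note: the status probe exited 6 because the profile's context is missing from /Users/jenkins/minikube-integration/15565-24808/kubeconfig, so the client fell back to the stale localhost:8443 endpoint. A sketch of the check-and-repair that the warning above itself recommends (kubeconfig path taken from the status error):

    # List contexts in the test kubeconfig; "ingress-addon-legacy-901000" should be absent.
    kubectl config get-contexts --kubeconfig /Users/jenkins/minikube-integration/15565-24808/kubeconfig

    # Rewrite the entry, as the warning suggests (KUBECONFIG must point at the same file):
    KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig \
      out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 update-context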

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.55s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 addons enable ingress-dns --alsologtostderr -v=5
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-901000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.087447427s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 10:55:10.269425   29504 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:55:10.269666   29504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:55:10.269671   29504 out.go:309] Setting ErrFile to fd 2...
	I0128 10:55:10.269675   29504 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:55:10.269791   29504 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 10:55:10.291820   29504 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0128 10:55:10.314129   29504 config.go:180] Loaded profile config "ingress-addon-legacy-901000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0128 10:55:10.314160   29504 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-901000"
	I0128 10:55:10.314175   29504 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-901000"
	I0128 10:55:10.314692   29504 host.go:66] Checking if "ingress-addon-legacy-901000" exists ...
	I0128 10:55:10.315680   29504 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-901000 --format={{.State.Status}}
	I0128 10:55:10.395017   29504 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0128 10:55:10.417115   29504 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0128 10:55:10.438883   29504 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0128 10:55:10.438921   29504 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0128 10:55:10.439067   29504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-901000
	I0128 10:55:10.497516   29504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58333 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/ingress-addon-legacy-901000/id_rsa Username:docker}
	I0128 10:55:10.597718   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:10.649917   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:10.649938   29504 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:10.926658   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:10.982231   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:10.982250   29504 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:11.524722   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:11.579177   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:11.579192   29504 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:12.236505   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:12.292179   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:12.292194   29504 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:13.084882   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:13.138551   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:13.138565   29504 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:14.308984   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:14.361579   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:14.361597   29504 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:16.616443   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:16.670483   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:16.670497   29504 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:18.281761   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:18.335994   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:18.336009   29504 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:21.141948   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:21.195641   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:21.195659   29504 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:25.020916   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:25.075360   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:25.075380   29504 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:32.774288   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:32.829214   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:32.829229   29504 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:47.465717   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:55:47.521058   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:55:47.521073   29504 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:56:15.929663   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:56:15.984430   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:56:15.984449   29504 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:56:39.155371   29504 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0128 10:56:39.208958   29504 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0128 10:56:39.230901   29504 out.go:177] 
	W0128 10:56:39.251824   29504 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0128 10:56:39.251847   29504 out.go:239] * 
	W0128 10:56:39.256860   29504 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 10:56:39.278600   29504 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
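Triage note: the same connection-refused loop as the previous test. The docker inspect output in this report shows the node's 8443/tcp published on 127.0.0.1:58332, so the apiserver can also be probed from the host; a sketch, assuming the container is still running (the host port is specific to this run and will differ elsewhere):

    # Confirm the published host port for the node's 8443/tcp...
    docker port ingress-addon-legacy-901000 8443

    # ...and probe it directly; "connection refused" here confirms the apiserver
    # is down, independent of any kubeconfig problems.
    curl -k https://127.0.0.1:58332/version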
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-901000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba",
	        "Created": "2023-01-28T18:49:37.165157201Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:49:37.461845975Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hosts",
	        "LogPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba-json.log",
	        "Name": "/ingress-addon-legacy-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fefdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/docker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5fc008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e557a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-901000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-901000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18ec4a350e09436ce1a1dd8dcc6f0558e16bec1fef56a541b97fc9669413ff44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58329"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58331"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58332"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/18ec4a350e09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5153d3ba400f",
	                        "ingress-addon-legacy-901000"
	                    ],
	                    "NetworkID": "75f5fa8912f9a79397f8005b10f53f0b6e94046f4747d4b6144f67e72664930f",
	                    "EndpointID": "ab06380879e1f23ecc40191323cc45f687a141c76e2fa5fcd320735101d15e31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000: exit status 6 (404.122811ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:56:39.755243   29613 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-901000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.55s)
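Triage note: this failure and the ValidateIngressAddons failure that follows share the same proximate cause: status.go cannot extract the API server IP because the profile name "ingress-addon-legacy-901000" has no entry in the kubeconfig at /Users/jenkins/minikube-integration/15565-24808/kubeconfig. A minimal sketch of that membership check using client-go's kubeconfig loader (an illustration, not the harness's actual code; checkProfile is a hypothetical helper):

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// checkProfile reports whether a minikube profile name appears as a
	// cluster entry in the given kubeconfig file. Both arguments in main
	// are copied from the log above; the helper itself is illustrative.
	func checkProfile(kubeconfig, profile string) (bool, error) {
		cfg, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			return false, err
		}
		_, ok := cfg.Clusters[profile]
		return ok, nil
	}
	
	func main() {
		ok, err := checkProfile(
			"/Users/jenkins/minikube-integration/15565-24808/kubeconfig",
			"ingress-addon-legacy-901000")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// On this run the lookup would report false, matching the
		// "does not appear in ... kubeconfig" error above.
		fmt.Println("profile present:", ok)
	}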

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-901000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-901000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba",
	        "Created": "2023-01-28T18:49:37.165157201Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 439741,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T18:49:37.461845975Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hostname",
	        "HostsPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/hosts",
	        "LogPath": "/var/lib/docker/containers/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba/5153d3ba400f800f2b32a2dc2fa0f990238f716862aacce064069db143ece5ba-json.log",
	        "Name": "/ingress-addon-legacy-901000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-901000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-901000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fefdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/docker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5fc008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e557a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d3ccbf6656724e752c8ac68c6ef2a0ff497ae52813a93b822b387b81475b9d41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-901000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-901000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-901000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-901000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18ec4a350e09436ce1a1dd8dcc6f0558e16bec1fef56a541b97fc9669413ff44",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58329"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58330"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58331"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58332"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/18ec4a350e09",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-901000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5153d3ba400f",
	                        "ingress-addon-legacy-901000"
	                    ],
	                    "NetworkID": "75f5fa8912f9a79397f8005b10f53f0b6e94046f4747d4b6144f67e72664930f",
	                    "EndpointID": "ab06380879e1f23ecc40191323cc45f687a141c76e2fa5fcd320735101d15e31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-901000 -n ingress-addon-legacy-901000: exit status 6 (397.566061ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 10:56:40.213720   29627 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-901000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-901000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.46s)
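Triage note: "failed to get Kubernetes client: <nil>" follows directly from the kubeconfig problem above: with no endpoint recorded for the profile, no REST client can be built. A sketch of the standard client-go construction path the test depends on (illustrative only; the harness wraps this differently):

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Kubeconfig path copied from the log above. If the profile's
		// cluster endpoint is missing from this file, config or client
		// construction fails and the caller is left without a client.
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/Users/jenkins/minikube-integration/15565-24808/kubeconfig")
		if err != nil {
			fmt.Println("build rest config:", err)
			return
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("new clientset:", err)
			return
		}
		fmt.Printf("client ready: %T\n", client)
	}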

                                                
                                    
TestRunningBinaryUpgrade (71.28s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker 
E0128 11:17:03.530438   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:17:07.376650   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker : exit status 70 (55.852089492s)

                                                
                                                
-- stdout --
	* [running-upgrade-062000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig533271328
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:17:06.404440742 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-062000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:17:25.529879115 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-062000", then "minikube start -p running-upgrade-062000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.95 MiB ... 542.91 MiB (download progress meter, flattened)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:17:25.529879115 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker : exit status 70 (4.295444151s)

                                                
                                                
-- stdout --
	* [running-upgrade-062000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig2202595344
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-062000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3658844636.exe start -p running-upgrade-062000 --memory=2200 --vm-driver=docker : exit status 70 (4.368949349s)

                                                
                                                
-- stdout --
	* [running-upgrade-062000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1990663855
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-062000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
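Triage note: in the rendered unit shown in the diffs above, ExecReload ends with "kill -s HUP " and nothing after it, whereas the stock unit passes $MAINPID. That gap is consistent with an unset variable in the old binary's unit template. A minimal text/template sketch of how an empty field produces exactly that trailing blank (the struct and field name are hypothetical):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// unitData is a hypothetical stand-in for whatever struct the v1.9.0
	// provisioner fed its docker.service template.
	type unitData struct {
		MainPID string // left empty to reproduce the gap seen in the log
	}
	
	func main() {
		tmpl := template.Must(template.New("unit").Parse(
			"ExecReload=/bin/kill -s HUP {{.MainPID}}\n"))
		// Prints "ExecReload=/bin/kill -s HUP " with a trailing space,
		// matching the provisioned unit in the diffs above.
		if err := tmpl.Execute(os.Stdout, unitData{}); err != nil {
			panic(err)
		}
	}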
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-01-28 11:17:39.829063 -0800 PST m=+2316.605572681
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-062000
helpers_test.go:235: (dbg) docker inspect running-upgrade-062000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715",
	        "Created": "2023-01-28T19:17:14.570777726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 564349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:17:14.790174583Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715/hostname",
	        "HostsPath": "/var/lib/docker/containers/5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715/hosts",
	        "LogPath": "/var/lib/docker/containers/5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715/5fd76b0e528aff7efc5053323f7e8ff4c83f4daf0db7a881538bd381a288d715-json.log",
	        "Name": "/running-upgrade-062000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-062000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7fff4c8964e1fd781d8a2f0407cd6db4a2ef8edb698a96b8334b7f7e988c9c31-init/diff:/var/lib/docker/overlay2/f2082e8368827d702c9b897534123c77316a5f99a01a2ecc698ec89dd0e8a00b/diff:/var/lib/docker/overlay2/b7552f8ec85a58c0dc8c1055a356360ec507e18d5ac5f3773d8dcee24b70d60e/diff:/var/lib/docker/overlay2/1b71cb2eff0873f607d971cb941b8afea6e7c40a7bf5386b8d9f3404d37fb3de/diff:/var/lib/docker/overlay2/2e2f1db693cfd333d4daeb80baf4fab0f859df66206a50a784991ae746eb6b08/diff:/var/lib/docker/overlay2/df93a6dbaf0bd330b14cb706b27b98cc8c024b2cfef7dd65f9e863eb228d93c1/diff:/var/lib/docker/overlay2/e1b6999e13f526f1513a4193298162abf99a50546b397c39f376bdcba622b3e1/diff:/var/lib/docker/overlay2/f195710d7c50118df874fdf885422c431610fc6ac2010c2200ef4345c5b2d64a/diff:/var/lib/docker/overlay2/f1fc58d52bb2de6bce96d05a499221a90e72e1384317eb636dcf83396b33e7d7/diff:/var/lib/docker/overlay2/f26fa1480745883a190e1d42242bbbee96e02877913dcf41a61f54876c93cddc/diff:/var/lib/docker/overlay2/563dee7dac001ba952f4d08587d2bfc26a88659a7277fd827fc88bc5ed3b0617/diff:/var/lib/docker/overlay2/c398ee3d451c35b0eff9bad390e6feb8327dccb33d756c0ec1aaeaf0b07561a1/diff:/var/lib/docker/overlay2/e141d730e31ee69ec1df6689fc546a4ec3853de9484de15045fc23b5a7406bc3/diff:/var/lib/docker/overlay2/ae02f9ebec64d826db3d0d14682f361dfcd86128a1846fd66ec3d014f6a890d8/diff:/var/lib/docker/overlay2/53fc81dcf65012d4c4b871f170af11946003ab3ba8946424b34edc11d3321e05/diff:/var/lib/docker/overlay2/fd0193053b8accc539c62635da0553c6caa5fd9bfe54f15ce464bd10b55508b5/diff:/var/lib/docker/overlay2/cfa8e4768a11a2570a454569de54d90d499ae40feae3858b13fb29bd8cf7ced5/diff:/var/lib/docker/overlay2/44054d6264e6bade67eb78076bcec6ecea32beb741019a1fa190b347f85b3af0/diff:/var/lib/docker/overlay2/4400651b5a8456da2e096cecb017decc6d525ef3b3f1f1ae54ad9f4956ec6168/diff:/var/lib/docker/overlay2/d3d1e0c5641b1dcc7da1481378d754114ac6a5aac7febf4a1c63d4045ce8fe09/diff:/var/lib/docker/overlay2/264806b7a4946f208a9da0e95425d8bf83cc7b27de055edf40f51307b2fe2972/diff:/var/lib/docker/overlay2/4a48420b5f84f99deb556dd0c6c30624ea192d1cf9a1586f2fc8ad69fb653c8c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7fff4c8964e1fd781d8a2f0407cd6db4a2ef8edb698a96b8334b7f7e988c9c31/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7fff4c8964e1fd781d8a2f0407cd6db4a2ef8edb698a96b8334b7f7e988c9c31/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7fff4c8964e1fd781d8a2f0407cd6db4a2ef8edb698a96b8334b7f7e988c9c31/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-062000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-062000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-062000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-062000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-062000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "52ef67e225ff9f814ad3cc7f4a1408eb520c85b3f6feb78cfa4c07a4b79e9d55",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60404"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60406"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/52ef67e225ff",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7f5d68c387bb6fb537522bb8eb72520cee280e96e69d774920e2268008febf84",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "d8afff2198de0b56ef3b30d2c6866a99d956efc295bd21af171990714694cadb",
	                    "EndpointID": "7f5d68c387bb6fb537522bb8eb72520cee280e96e69d774920e2268008febf84",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
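The Ports map in the inspect output above is the contract the test helpers rely on: each container port (22 for SSH, 2376 for the Docker daemon, 8443 for the API server) is published on a distinct 127.0.0.1 high port. A minimal Go sketch of reading one mapping back with the same --format template that appears later in this log (hostPort is an illustrative helper, not minikube code; it assumes the docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the loopback port Docker published for the given
// container port, using the inspect template seen in the logs.
func hostPort(container, port string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Per the JSON above, this should print 60404 for this profile container.
	p, err := hostPort("running-upgrade-062000", "22/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh published on 127.0.0.1:" + p)
}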
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-062000 -n running-upgrade-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-062000 -n running-upgrade-062000: exit status 6 (388.624964ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0128 11:17:40.265504   36430 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-062000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-062000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
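The exit-status-6 path above comes down to one check: the profile name must appear under clusters in the kubeconfig, or there is no endpoint to extract (hence the status.go:415 error and the update-context hint). A rough sketch of that lookup, assuming the gopkg.in/yaml.v3 module is available (endpointFor is an illustrative helper, not minikube's implementation):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency; any YAML parser works
)

// Minimal slice of a kubeconfig: just cluster names and servers.
type kubeconfig struct {
	Clusters []struct {
		Name    string `yaml:"name"`
		Cluster struct {
			Server string `yaml:"server"`
		} `yaml:"cluster"`
	} `yaml:"clusters"`
}

// endpointFor mirrors the check behind the status.go:415 error above: if
// the profile is missing from the kubeconfig's clusters, there is no
// endpoint to extract and `minikube update-context` is the usual fix.
func endpointFor(path, profile string) (string, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	var kc kubeconfig
	if err := yaml.Unmarshal(raw, &kc); err != nil {
		return "", err
	}
	for _, c := range kc.Clusters {
		if c.Name == profile {
			return c.Cluster.Server, nil
		}
	}
	return "", fmt.Errorf("extract IP: %q does not appear in %s", profile, path)
}

func main() {
	// KUBECONFIG may be empty outside the CI environment; sketch only.
	ep, err := endpointFor(os.Getenv("KUBECONFIG"), "running-upgrade-062000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("endpoint:", ep)
}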
helpers_test.go:175: Cleaning up "running-upgrade-062000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-062000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-062000: (2.360082684s)
--- FAIL: TestRunningBinaryUpgrade (71.28s)

TestKubernetesUpgrade (556.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0128 11:18:38.723656   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:43.844485   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:54.086082   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m10.93048926s)

-- stdout --
	* [kubernetes-upgrade-325000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-325000 in cluster kubernetes-upgrade-325000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...

-- /stdout --
** stderr ** 
	I0128 11:18:37.507730   36807 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:18:37.507881   36807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:18:37.507885   36807 out.go:309] Setting ErrFile to fd 2...
	I0128 11:18:37.507889   36807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:18:37.508001   36807 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:18:37.508526   36807 out.go:303] Setting JSON to false
	I0128 11:18:37.527295   36807 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8292,"bootTime":1674925225,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:18:37.527391   36807 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:18:37.549184   36807 out.go:177] * [kubernetes-upgrade-325000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:18:37.591622   36807 notify.go:220] Checking for updates...
	I0128 11:18:37.613922   36807 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:18:37.635640   36807 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:18:37.656693   36807 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:18:37.677911   36807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:18:37.699880   36807 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:18:37.742646   36807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:18:37.764590   36807 config.go:180] Loaded profile config "cert-expiration-294000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:18:37.764697   36807 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:18:37.825468   36807 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:18:37.825597   36807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:18:37.967623   36807 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:18:37.875270342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:18:38.042372   36807 out.go:177] * Using the docker driver based on user configuration
	I0128 11:18:38.063418   36807 start.go:296] selected driver: docker
	I0128 11:18:38.063442   36807 start.go:857] validating driver "docker" against <nil>
	I0128 11:18:38.063462   36807 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:18:38.067342   36807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:18:38.209016   36807 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:18:38.116897039 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:18:38.209137   36807 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 11:18:38.209312   36807 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 11:18:38.232215   36807 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 11:18:38.254200   36807 cni.go:84] Creating CNI manager for ""
	I0128 11:18:38.254238   36807 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:18:38.254251   36807 start_flags.go:319] config:
	{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:18:38.299307   36807 out.go:177] * Starting control plane node kubernetes-upgrade-325000 in cluster kubernetes-upgrade-325000
	I0128 11:18:38.321170   36807 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:18:38.341995   36807 out.go:177] * Pulling base image ...
	I0128 11:18:38.363287   36807 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:18:38.363287   36807 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:18:38.363385   36807 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 11:18:38.363403   36807 cache.go:57] Caching tarball of preloaded images
	I0128 11:18:38.363612   36807 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:18:38.363634   36807 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 11:18:38.364603   36807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/config.json ...
	I0128 11:18:38.364753   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/config.json: {Name:mk1eea66d2c38c09fc4911649cbdd2492aa10e9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:38.420858   36807 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:18:38.420882   36807 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:18:38.420901   36807 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:18:38.420950   36807 start.go:364] acquiring machines lock for kubernetes-upgrade-325000: {Name:mk4978fe22171095c379c98403765438299c79d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:18:38.421101   36807 start.go:368] acquired machines lock for "kubernetes-upgrade-325000" in 139.732µs
	I0128 11:18:38.421128   36807 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:18:38.421196   36807 start.go:125] createHost starting for "" (driver="docker")
	I0128 11:18:38.443089   36807 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 11:18:38.443295   36807 start.go:159] libmachine.API.Create for "kubernetes-upgrade-325000" (driver="docker")
	I0128 11:18:38.443331   36807 client.go:168] LocalClient.Create starting
	I0128 11:18:38.443413   36807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem
	I0128 11:18:38.443473   36807 main.go:141] libmachine: Decoding PEM data...
	I0128 11:18:38.443492   36807 main.go:141] libmachine: Parsing certificate...
	I0128 11:18:38.443544   36807 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem
	I0128 11:18:38.443576   36807 main.go:141] libmachine: Decoding PEM data...
	I0128 11:18:38.443590   36807 main.go:141] libmachine: Parsing certificate...
	I0128 11:18:38.464153   36807 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 11:18:38.518611   36807 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 11:18:38.518711   36807 network_create.go:281] running [docker network inspect kubernetes-upgrade-325000] to gather additional debugging logs...
	I0128 11:18:38.518730   36807 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-325000
	W0128 11:18:38.572169   36807 cli_runner.go:211] docker network inspect kubernetes-upgrade-325000 returned with exit code 1
	I0128 11:18:38.572194   36807 network_create.go:284] error running [docker network inspect kubernetes-upgrade-325000]: docker network inspect kubernetes-upgrade-325000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-325000
	I0128 11:18:38.572208   36807 network_create.go:286] output of [docker network inspect kubernetes-upgrade-325000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-325000
	
	** /stderr **
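The exit-code-1 probe above is the expected path, not a failure: `docker network inspect` on a missing network fails with "No such network" on stderr, which is the cue to go create one. Sketched as a Go predicate (networkExists is an illustrative name; assumes the docker CLI on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists probes for a Docker network the way the log above does:
// success means it exists, while exit status 1 with "No such network"
// means it is simply absent and should be created.
func networkExists(name string) (bool, error) {
	cmd := exec.Command("docker", "network", "inspect", name)
	out, err := cmd.CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "No such network") {
		return false, nil // absent, not an error: caller creates it next
	}
	return false, fmt.Errorf("docker network inspect %s: %v", name, err)
}

func main() {
	ok, err := networkExists("kubernetes-upgrade-325000")
	fmt.Println(ok, err)
}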
	I0128 11:18:38.572283   36807 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 11:18:38.628074   36807 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:18:38.628400   36807 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00126c110}
	I0128 11:18:38.628411   36807 network_create.go:123] attempt to create docker network kubernetes-upgrade-325000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0128 11:18:38.628475   36807 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	W0128 11:18:38.682968   36807 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000 returned with exit code 1
	W0128 11:18:38.683016   36807 network_create.go:148] failed to create docker network kubernetes-upgrade-325000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 11:18:38.683029   36807 network_create.go:115] failed to create docker network kubernetes-upgrade-325000 192.168.58.0/24, will retry: subnet is taken
	I0128 11:18:38.685090   36807 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:18:38.685396   36807 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00113cd70}
	I0128 11:18:38.685407   36807 network_create.go:123] attempt to create docker network kubernetes-upgrade-325000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0128 11:18:38.685472   36807 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	W0128 11:18:38.739339   36807 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000 returned with exit code 1
	W0128 11:18:38.739381   36807 network_create.go:148] failed to create docker network kubernetes-upgrade-325000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 11:18:38.739398   36807 network_create.go:115] failed to create docker network kubernetes-upgrade-325000 192.168.67.0/24, will retry: subnet is taken
	I0128 11:18:38.740868   36807 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:18:38.741190   36807 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00113dbc0}
	I0128 11:18:38.741203   36807 network_create.go:123] attempt to create docker network kubernetes-upgrade-325000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0128 11:18:38.741264   36807 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 kubernetes-upgrade-325000
	I0128 11:18:38.828428   36807 network_create.go:107] docker network kubernetes-upgrade-325000 192.168.76.0/24 created
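The three attempts above show the subnet walk behind network_create.go: candidate /24s advance in steps of 9 (49, 58, 67, 76, ...), and a daemon error of "Pool overlaps with other one on this address space" means the subnet is taken, so try the next. A compressed Go sketch reconstructed from these log lines (it lets Docker reject taken pools and omits minikube's reserved-subnet bookkeeping and labels):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries candidate /24 subnets until `docker network create`
// succeeds, retrying on the "Pool overlaps" error seen in the log above.
func createNetwork(name string) (string, error) {
	for third := 49; third <= 247; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet is taken, advance to the next candidate
		}
		return "", fmt.Errorf("network create %s: %v\n%s", name, err, out)
	}
	return "", fmt.Errorf("no free /24 found for %s", name)
}

func main() {
	subnet, err := createNetwork("kubernetes-upgrade-325000")
	fmt.Println(subnet, err)
}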
	I0128 11:18:38.828472   36807 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-325000" container
	I0128 11:18:38.828597   36807 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 11:18:38.884319   36807 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-325000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --label created_by.minikube.sigs.k8s.io=true
	I0128 11:18:38.940740   36807 oci.go:103] Successfully created a docker volume kubernetes-upgrade-325000
	I0128 11:18:38.940869   36807 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-325000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --entrypoint /usr/bin/test -v kubernetes-upgrade-325000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 11:18:39.386387   36807 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-325000
	I0128 11:18:39.386415   36807 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:18:39.386429   36807 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 11:18:39.386546   36807 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-325000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 11:18:45.126858   36807 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-325000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (5.740124987s)
	I0128 11:18:45.126882   36807 kic.go:199] duration metric: took 5.740353 seconds to extract preloaded images to volume
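The 5.7-second step above is the preload trick: rather than pulling images inside the node, a throwaway container mounts the lz4 tarball read-only plus the profile's named volume at /extractDir and untars into it, so the real node container starts with /var already populated. The same invocation wrapped as a Go helper (extractPreload is an illustrative name; paths shortened from the log):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload replays the pattern from the log above: run the kicbase
// image with tar as the entrypoint so the preload lands in the volume.
func extractPreload(tarball, volume, baseImage string) error {
	out, err := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		baseImage,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("preload extract: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
		"kubernetes-upgrade-325000",
		"gcr.io/k8s-minikube/kicbase:v0.0.37",
	)
	fmt.Println(err)
}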
	I0128 11:18:45.126990   36807 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 11:18:45.267867   36807 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-325000 --name kubernetes-upgrade-325000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-325000 --network kubernetes-upgrade-325000 --ip 192.168.76.2 --volume kubernetes-upgrade-325000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 11:18:45.628315   36807 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Running}}
	I0128 11:18:45.690029   36807 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:18:45.749383   36807 cli_runner.go:164] Run: docker exec kubernetes-upgrade-325000 stat /var/lib/dpkg/alternatives/iptables
	I0128 11:18:45.854796   36807 oci.go:144] the created container "kubernetes-upgrade-325000" has a running status.
	I0128 11:18:45.854862   36807 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa...
	I0128 11:18:45.969555   36807 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 11:18:46.141792   36807 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:18:46.200817   36807 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 11:18:46.200835   36807 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-325000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 11:18:46.304272   36807 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:18:46.364043   36807 machine.go:88] provisioning docker machine ...
	I0128 11:18:46.364084   36807 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-325000"
	I0128 11:18:46.364185   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:46.420104   36807 main.go:141] libmachine: Using SSH client type: native
	I0128 11:18:46.420298   36807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60522 <nil> <nil>}
	I0128 11:18:46.420317   36807 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-325000 && echo "kubernetes-upgrade-325000" | sudo tee /etc/hostname
	I0128 11:18:46.562924   36807 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-325000
	
	I0128 11:18:46.563025   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:46.620221   36807 main.go:141] libmachine: Using SSH client type: native
	I0128 11:18:46.620373   36807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60522 <nil> <nil>}
	I0128 11:18:46.620391   36807 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-325000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-325000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-325000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:18:46.750250   36807 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:18:46.750277   36807 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:18:46.750298   36807 ubuntu.go:177] setting up certificates
	I0128 11:18:46.750305   36807 provision.go:83] configureAuth start
	I0128 11:18:46.750376   36807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-325000
	I0128 11:18:46.810498   36807 provision.go:138] copyHostCerts
	I0128 11:18:46.810605   36807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:18:46.810613   36807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:18:46.810726   36807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:18:46.810930   36807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:18:46.810936   36807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:18:46.810998   36807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:18:46.811154   36807 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:18:46.811160   36807 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:18:46.811221   36807 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:18:46.811349   36807 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-325000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-325000]
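The SAN list above (container IP, loopback, localhost, minikube, and the profile name) is what lets a single server certificate satisfy every address a client may dial. A self-signed stand-in with the same SANs using only the Go standard library (minikube signs with its CA instead, so this is a sketch of the shape, not the real provisioning code):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-325000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-325000"},
	}
	// Self-signed: template doubles as parent, unlike the CA-signed real cert.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	fmt.Fprintln(os.Stderr, "server cert written to stdout")
}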
	I0128 11:18:47.019225   36807 provision.go:172] copyRemoteCerts
	I0128 11:18:47.019282   36807 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:18:47.019332   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:47.081591   36807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60522 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:18:47.178538   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:18:47.196442   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 11:18:47.214235   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:18:47.231675   36807 provision.go:86] duration metric: configureAuth took 481.34848ms
	I0128 11:18:47.231691   36807 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:18:47.231839   36807 config.go:180] Loaded profile config "kubernetes-upgrade-325000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:18:47.231895   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:47.288915   36807 main.go:141] libmachine: Using SSH client type: native
	I0128 11:18:47.289096   36807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60522 <nil> <nil>}
	I0128 11:18:47.289112   36807 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:18:47.424608   36807 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:18:47.424622   36807 ubuntu.go:71] root file system type: overlay
	I0128 11:18:47.424790   36807 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:18:47.424882   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:47.482600   36807 main.go:141] libmachine: Using SSH client type: native
	I0128 11:18:47.482760   36807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60522 <nil> <nil>}
	I0128 11:18:47.482808   36807 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:18:47.624667   36807 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:18:47.624763   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:47.682870   36807 main.go:141] libmachine: Using SSH client type: native
	I0128 11:18:47.683038   36807 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60522 <nil> <nil>}
	I0128 11:18:47.683051   36807 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:18:48.301969   36807 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:18:47.621453389 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 11:18:48.301995   36807 machine.go:91] provisioned docker machine in 1.937898356s
	I0128 11:18:48.302001   36807 client.go:171] LocalClient.Create took 9.858493024s
	I0128 11:18:48.302033   36807 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-325000" took 9.858567142s
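The SSH command at 11:18:47.683 is an idempotent update: render the candidate unit to docker.service.new, and only when `diff -u` reports a difference swap the file in and daemon-reload/enable/restart. The non-empty diff above is exactly why Docker restarted here. A local Go sketch of the same guard (assumes passwordless sudo; applyIfChanged is an illustrative name, run over SSH in the real flow):

package main

import (
	"fmt"
	"os/exec"
)

// applyIfChanged reproduces the guard from the log above: diff exits 0
// when the files match (nothing to do); any other exit triggers the swap
// and restart, matching the shell's `diff ... || { ... }` form.
func applyIfChanged(unit string) error {
	cur, next := unit, unit+".new"
	if err := exec.Command("sudo", "diff", "-u", cur, next).Run(); err == nil {
		return nil // identical: leave the running service alone
	}
	steps := [][]string{
		{"sudo", "mv", next, cur},
		{"sudo", "systemctl", "-f", "daemon-reload"},
		{"sudo", "systemctl", "-f", "enable", "docker"},
		{"sudo", "systemctl", "-f", "restart", "docker"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v\n%s", s, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(applyIfChanged("/lib/systemd/system/docker.service"))
}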
	I0128 11:18:48.302041   36807 start.go:300] post-start starting for "kubernetes-upgrade-325000" (driver="docker")
	I0128 11:18:48.302045   36807 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:18:48.302127   36807 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:18:48.302201   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:48.360325   36807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60522 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:18:48.454330   36807 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:18:48.458023   36807 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:18:48.458037   36807 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:18:48.458045   36807 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:18:48.458050   36807 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:18:48.458061   36807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:18:48.458160   36807 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:18:48.458348   36807 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:18:48.458542   36807 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:18:48.465937   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:18:48.483277   36807 start.go:303] post-start completed in 181.22443ms
	I0128 11:18:48.483809   36807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-325000
	I0128 11:18:48.541503   36807 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/config.json ...
	I0128 11:18:48.541930   36807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:18:48.541992   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:48.599786   36807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60522 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:18:48.691548   36807 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:18:48.696524   36807 start.go:128] duration metric: createHost completed in 10.275136747s
	I0128 11:18:48.696541   36807 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 10.275253942s
	I0128 11:18:48.696619   36807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-325000
	I0128 11:18:48.753721   36807 ssh_runner.go:195] Run: cat /version.json
	I0128 11:18:48.753740   36807 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 11:18:48.753790   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:48.753823   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:48.815187   36807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60522 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:18:48.815461   36807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60522 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	W0128 11:18:49.104577   36807 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:18:49.104683   36807 ssh_runner.go:195] Run: systemctl --version
	I0128 11:18:49.109718   36807 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:18:49.114931   36807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:18:49.135524   36807 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:18:49.135593   36807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:18:49.150289   36807 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:18:49.158022   36807 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0128 11:18:49.158039   36807 start.go:483] detecting cgroup driver to use...
	I0128 11:18:49.158051   36807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:18:49.158148   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:18:49.171269   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 11:18:49.180097   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:18:49.188914   36807 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:18:49.188974   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:18:49.198180   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:18:49.206671   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:18:49.215145   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:18:49.223679   36807 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:18:49.231641   36807 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:18:49.240325   36807 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:18:49.248089   36807 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:18:49.255372   36807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:18:49.323673   36807 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:18:49.391746   36807 start.go:483] detecting cgroup driver to use...
	I0128 11:18:49.391764   36807 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:18:49.391819   36807 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:18:49.403056   36807 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:18:49.403122   36807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:18:49.414505   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:18:49.428682   36807 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:18:49.514824   36807 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:18:49.604146   36807 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:18:49.604169   36807 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:18:49.617638   36807 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:18:49.711710   36807 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:18:49.914748   36807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:18:49.946233   36807 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:18:50.019852   36807 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 11:18:50.020018   36807 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-325000 dig +short host.docker.internal
	I0128 11:18:50.139561   36807 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:18:50.139661   36807 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:18:50.144115   36807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:18:50.154005   36807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:18:50.211563   36807 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:18:50.211646   36807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:18:50.236417   36807 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:18:50.236434   36807 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:18:50.236526   36807 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:18:50.260300   36807 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:18:50.260320   36807 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:18:50.260406   36807 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:18:50.329679   36807 cni.go:84] Creating CNI manager for ""
	I0128 11:18:50.329695   36807 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:18:50.329709   36807 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:18:50.329730   36807 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-325000 NodeName:kubernetes-upgrade-325000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:18:50.329848   36807 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-325000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-325000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:18:50.329932   36807 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-325000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:18:50.329996   36807 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 11:18:50.338031   36807 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:18:50.338096   36807 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:18:50.345476   36807 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0128 11:18:50.358376   36807 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:18:50.371134   36807 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
	I0128 11:18:50.384169   36807 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:18:50.388053   36807 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:18:50.398164   36807 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000 for IP: 192.168.76.2
	I0128 11:18:50.398186   36807 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.398384   36807 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:18:50.398456   36807 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:18:50.398509   36807 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key
	I0128 11:18:50.398524   36807 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt with IP's: []
	I0128 11:18:50.510118   36807 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt ...
	I0128 11:18:50.510132   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt: {Name:mk7af0819567fdbabbdc211bccb4a402b556c37a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.510474   36807 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key ...
	I0128 11:18:50.510482   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key: {Name:mk24211f194aa01ba9bfad95a04dab833654d317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.510712   36807 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key.31bdca25
	I0128 11:18:50.510727   36807 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 11:18:50.635709   36807 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt.31bdca25 ...
	I0128 11:18:50.635718   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt.31bdca25: {Name:mke6dba603cfc10caac548bd088e086ab71636bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.635939   36807 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key.31bdca25 ...
	I0128 11:18:50.635946   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key.31bdca25: {Name:mk457c13bf0ddf7137064d627e7c2b5d8151ba20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.636128   36807 certs.go:333] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt
	I0128 11:18:50.636414   36807 certs.go:337] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key
	I0128 11:18:50.636584   36807 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key
	I0128 11:18:50.636599   36807 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.crt with IP's: []
	I0128 11:18:50.758156   36807 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.crt ...
	I0128 11:18:50.758165   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.crt: {Name:mke74e5e8907b4317865a41198d05dc677b107a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.758383   36807 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key ...
	I0128 11:18:50.758390   36807 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key: {Name:mkd9904deca62e9cbcc95380a195775e19a42ddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:18:50.758756   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:18:50.758803   36807 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:18:50.758836   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:18:50.758888   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:18:50.758948   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:18:50.758996   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:18:50.759087   36807 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:18:50.759766   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:18:50.778664   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:18:50.796172   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:18:50.813708   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:18:50.831600   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:18:50.849050   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:18:50.866816   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:18:50.884764   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:18:50.902343   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:18:50.920546   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:18:50.938133   36807 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:18:50.955616   36807 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:18:50.968547   36807 ssh_runner.go:195] Run: openssl version
	I0128 11:18:50.974542   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:18:50.983029   36807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:18:50.987135   36807 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:18:50.987188   36807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:18:50.992943   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:18:51.001438   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:18:51.009608   36807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:18:51.013897   36807 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:18:51.013945   36807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:18:51.019665   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
	I0128 11:18:51.027986   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:18:51.036729   36807 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:18:51.040699   36807 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:18:51.040747   36807 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:18:51.046383   36807 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:18:51.054690   36807 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:18:51.054793   36807 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:18:51.077607   36807 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:18:51.085691   36807 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:18:51.093590   36807 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:18:51.093647   36807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:18:51.101400   36807 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:18:51.101432   36807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:18:51.148558   36807 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:18:51.148614   36807 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:18:51.446126   36807 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:18:51.446223   36807 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:18:51.446314   36807 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:18:51.670099   36807 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:18:51.671634   36807 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:18:51.678245   36807 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:18:51.750368   36807 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:18:51.772077   36807 out.go:204]   - Generating certificates and keys ...
	I0128 11:18:51.772197   36807 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:18:51.772280   36807 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:18:51.993396   36807 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 11:18:52.277369   36807 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 11:18:52.431470   36807 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 11:18:52.559622   36807 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 11:18:52.672127   36807 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 11:18:52.672246   36807 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-325000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:18:52.780245   36807 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 11:18:52.780355   36807 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-325000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:18:52.871642   36807 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 11:18:53.033559   36807 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 11:18:53.260974   36807 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 11:18:53.261039   36807 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:18:53.308423   36807 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:18:53.445276   36807 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:18:53.525683   36807 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:18:53.608956   36807 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:18:53.609501   36807 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:18:53.631354   36807 out.go:204]   - Booting up control plane ...
	I0128 11:18:53.631562   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:18:53.631723   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:18:53.631840   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:18:53.631980   36807 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:18:53.632168   36807 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:19:33.618198   36807 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:19:33.618570   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:19:33.618749   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:19:38.619456   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:19:38.620533   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:19:48.620565   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:19:48.620734   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:20:08.623652   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:20:08.623883   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:20:48.557468   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:20:48.557694   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:20:48.557755   36807 kubeadm.go:322] 
	I0128 11:20:48.557800   36807 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:20:48.557850   36807 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:20:48.557861   36807 kubeadm.go:322] 
	I0128 11:20:48.557898   36807 kubeadm.go:322] This error is likely caused by:
	I0128 11:20:48.557935   36807 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:20:48.558099   36807 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:20:48.558111   36807 kubeadm.go:322] 
	I0128 11:20:48.558247   36807 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:20:48.558292   36807 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:20:48.558335   36807 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:20:48.558343   36807 kubeadm.go:322] 
	I0128 11:20:48.558480   36807 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:20:48.558573   36807 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:20:48.558670   36807 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:20:48.558747   36807 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:20:48.558834   36807 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:20:48.558870   36807 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:20:48.561808   36807 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:20:48.561898   36807 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:20:48.562010   36807 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:20:48.562170   36807 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:20:48.562323   36807 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:20:48.562470   36807 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0128 11:20:48.562636   36807 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-325000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-325000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 11:20:48.562684   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:20:49.011593   36807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:20:49.027558   36807 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:20:49.027625   36807 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:20:49.040792   36807 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:20:49.040824   36807 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:20:49.102457   36807 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:20:49.102845   36807 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:20:49.525443   36807 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:20:49.525565   36807 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:20:49.525680   36807 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:20:49.832503   36807 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:20:49.833440   36807 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:20:49.841455   36807 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:20:49.922210   36807 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:20:49.948970   36807 out.go:204]   - Generating certificates and keys ...
	I0128 11:20:49.949040   36807 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:20:49.949108   36807 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:20:49.949170   36807 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:20:49.949223   36807 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:20:49.949332   36807 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:20:49.949381   36807 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:20:49.949482   36807 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:20:49.949544   36807 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:20:49.949634   36807 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:20:49.949714   36807 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:20:49.949763   36807 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:20:49.949826   36807 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:20:50.027094   36807 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:20:50.210955   36807 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:20:50.566557   36807 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:20:50.743486   36807 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:20:50.744047   36807 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:20:50.770625   36807 out.go:204]   - Booting up control plane ...
	I0128 11:20:50.770840   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:20:50.771025   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:20:50.771154   36807 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:20:50.771315   36807 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:20:50.771547   36807 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:21:30.753173   36807 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:21:30.753674   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:21:30.753831   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:21:35.755432   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:21:35.755640   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:21:45.757398   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:21:45.757623   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:22:05.758968   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:22:05.759194   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:22:45.760135   36807 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:22:45.760365   36807 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:22:45.760377   36807 kubeadm.go:322] 
	I0128 11:22:45.760429   36807 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:22:45.760458   36807 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:22:45.760466   36807 kubeadm.go:322] 
	I0128 11:22:45.760509   36807 kubeadm.go:322] This error is likely caused by:
	I0128 11:22:45.760539   36807 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:22:45.760626   36807 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:22:45.760635   36807 kubeadm.go:322] 
	I0128 11:22:45.760729   36807 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:22:45.760760   36807 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:22:45.760788   36807 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:22:45.760795   36807 kubeadm.go:322] 
	I0128 11:22:45.760893   36807 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:22:45.760975   36807 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:22:45.761044   36807 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:22:45.761086   36807 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:22:45.761152   36807 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:22:45.761179   36807 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:22:45.763961   36807 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:22:45.764024   36807 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:22:45.764129   36807 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:22:45.764228   36807 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:22:45.764295   36807 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:22:45.764351   36807 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:22:45.764376   36807 kubeadm.go:403] StartCluster complete in 3m54.774849368s
	I0128 11:22:45.764465   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:22:45.787503   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.787518   36807 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:22:45.787590   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:22:45.810013   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.810028   36807 logs.go:281] No container was found matching "etcd"
	I0128 11:22:45.810110   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:22:45.832857   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.832870   36807 logs.go:281] No container was found matching "coredns"
	I0128 11:22:45.832931   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:22:45.856135   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.856148   36807 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:22:45.856217   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:22:45.879141   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.879154   36807 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:22:45.879221   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:22:45.902504   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.902518   36807 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:22:45.902588   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:22:45.927181   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.927195   36807 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:22:45.927263   36807 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:22:45.951770   36807 logs.go:279] 0 containers: []
	W0128 11:22:45.951785   36807 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:22:45.951791   36807 logs.go:124] Gathering logs for container status ...
	I0128 11:22:45.951798   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:22:48.003048   36807 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051230045s)
	I0128 11:22:48.003183   36807 logs.go:124] Gathering logs for kubelet ...
	I0128 11:22:48.003192   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:22:48.040506   36807 logs.go:124] Gathering logs for dmesg ...
	I0128 11:22:48.040521   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:22:48.054260   36807 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:22:48.054272   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:22:48.109066   36807 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:22:48.109078   36807 logs.go:124] Gathering logs for Docker ...
	I0128 11:22:48.109084   36807 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	W0128 11:22:48.126133   36807 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:22:48.126154   36807 out.go:239] * 
	W0128 11:22:48.126256   36807 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:22:48.126269   36807 out.go:239] * 
	W0128 11:22:48.126903   36807 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 11:22:48.190351   36807 out.go:177] 
	W0128 11:22:48.232453   36807 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:22:48.232526   36807 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:22:48.232580   36807 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:22:48.253394   36807 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
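Note: the exit status 109 reported above is the code minikube exited with after logging "Exiting due to K8S_KUBELET_NOT_RUNNING"; kubeadm waited the full 4m0s, the kubelet never answered its health check on 127.0.0.1:10248, and the preflight warnings plus minikube's own suggestion point at a cgroup-driver mismatch (Docker reporting "cgroupfs" where "systemd" is recommended). A minimal triage sketch, assuming the kic container from this profile is still up; these are standard systemd/minikube invocations and were not part of this run:

	docker exec kubernetes-upgrade-325000 systemctl status kubelet --no-pager
	docker exec kubernetes-upgrade-325000 journalctl -xeu kubelet --no-pager | tail -n 50
	out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd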
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-325000
version_upgrade_test.go:235: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-325000: (1.588005028s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-325000 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-325000 status --format={{.Host}}: exit status 7 (113.301076ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
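Note: exit status 7 is expected after an explicit stop. As minikube's status help describes it, the exit code is a bit field over the components checked, roughly 1 (host not running) + 2 (cluster not running) + 4 (Kubernetes not running) = 7, i.e. everything down; hence "may be ok" above. A sketch for surfacing the code by hand (the echo is illustrative only, not part of the test):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-325000 status --format={{.Host}} || echo "status exit code: $?"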
version_upgrade_test.go:251: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:251: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (4m36.408717144s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-325000 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (444.712882ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-325000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-325000
	    minikube start -p kubernetes-upgrade-325000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3250002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-325000 --kubernetes-version=v1.26.1
	    

                                                
                                                
** /stderr **
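Note: exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) is the pass condition for this step: the test requests v1.16.0 against a running v1.26.1 cluster and asserts that minikube refuses rather than attempting an in-place downgrade, which Kubernetes does not support. To confirm the server stayed on v1.26.1, a sketch assuming jq is installed (the harness itself uses the raw `kubectl version --output=json` call at version_upgrade_test.go:256 above):

	kubectl --context kubernetes-upgrade-325000 version --output=json | jq -r .serverVersion.gitVersion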
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker 

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:283: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-325000 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=docker : (20.150797102s)
version_upgrade_test.go:287: *** TestKubernetesUpgrade FAILED at 2023-01-28 11:27:47.104952 -0800 PST m=+2923.944633318
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-325000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-325000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b",
	        "Created": "2023-01-28T19:18:45.322059229Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 591646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:22:51.449515826Z",
	            "FinishedAt": "2023-01-28T19:22:48.79111497Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b/hosts",
	        "LogPath": "/var/lib/docker/containers/c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b/c7634b4fa32c5abd5328dc0e02931235b64ce86cce9f36459cc05ef89bf6a92b-json.log",
	        "Name": "/kubernetes-upgrade-325000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-325000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-325000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9adb848caec0abc08d027ccd1c35bdff12a3a4793300d083dae45d9fd833c7d4-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9adb848caec0abc08d027ccd1c35bdff12a3a4793300d083dae45d9fd833c7d4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9adb848caec0abc08d027ccd1c35bdff12a3a4793300d083dae45d9fd833c7d4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9adb848caec0abc08d027ccd1c35bdff12a3a4793300d083dae45d9fd833c7d4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-325000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-325000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-325000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-325000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-325000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "89047b2d4bcfd07e06f91627e278d1f8f598317d74397ff3701f4a32ee870680",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60810"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60811"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60812"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60813"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60814"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/89047b2d4bcf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-325000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c7634b4fa32c",
	                        "kubernetes-upgrade-325000"
	                    ],
	                    "NetworkID": "9e558a73c09ed3a505a3910048ffe030c613f77c6b3e9bb989933249146491e3",
	                    "EndpointID": "d9e5aab1ac0125ee0dc759f8d8772aa53ef0d52accd47a9ee4ab0d0bc93c1f97",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
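Note: in the inspect output above, the apiserver's 8443/tcp is published on an ephemeral loopback port (127.0.0.1:60814 in this run), and minikube points the generated kubeconfig at that address. A sketch for extracting the mapping directly with a Go template instead of reading the full JSON (standard `docker inspect -f` usage, not part of the test):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-325000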
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-325000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-325000 logs -n 25: (3.18632102s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|--------------------------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   |         Version          |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|--------------------------|---------------------|---------------------|
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | journalctl -xeu kubelet --all                        |                           |         |                          |                     |                     |
	|         | --full --no-pager                                    |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo cat                                             |                           |         |                          |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo cat                                             |                           |         |                          |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | systemctl status docker --all                        |                           |         |                          |                     |                     |
	|         | --full --no-pager                                    |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo systemctl cat docker                            |                           |         |                          |                     |                     |
	|         | --no-pager                                           |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | cat /etc/docker/daemon.json                          |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | docker system info                                   |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | systemctl status cri-docker                          |                           |         |                          |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo systemctl cat cri-docker                        |                           |         |                          |                     |                     |
	|         | --no-pager                                           |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo cat                    | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo cat                    | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | cri-dockerd --version                                |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | systemctl status containerd                          |                           |         |                          |                     |                     |
	|         | --all --full --no-pager                              |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo systemctl cat containerd                        |                           |         |                          |                     |                     |
	|         | --no-pager                                           |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo cat                    | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | /lib/systemd/system/containerd.service               |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | sudo cat                                             |                           |         |                          |                     |                     |
	|         | /etc/containerd/config.toml                          |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | containerd config dump                               |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST |                     |
	|         | systemctl status crio --all                          |                           |         |                          |                     |                     |
	|         | --full --no-pager                                    |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | systemctl cat crio --no-pager                        |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | find /etc/crio -type f -exec                         |                           |         |                          |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                           |         |                          |                     |                     |
	| ssh     | -p custom-flannel-732000 sudo                        | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | crio config                                          |                           |         |                          |                     |                     |
	| delete  | -p custom-flannel-732000                             | custom-flannel-732000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	| start   | -p kubernetes-upgrade-325000                         | kubernetes-upgrade-325000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST |                     |
	|         | --memory=2200                                        |                           |         |                          |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |                          |                     |                     |
	|         | --driver=docker                                      |                           |         |                          |                     |                     |
	| start   | -p kubernetes-upgrade-325000                         | kubernetes-upgrade-325000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST | 28 Jan 23 11:27 PST |
	|         | --memory=2200                                        |                           |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                           |         |                          |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |                          |                     |                     |
	|         | --driver=docker                                      |                           |         |                          |                     |                     |
	| start   | -p false-732000 --memory=3072                        | false-732000              | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:27 PST |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |                          |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                           |         |                          |                     |                     |
	|         | --driver=docker                                      |                           |         |                          |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|--------------------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:27:28
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:27:28.613472   39720 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:27:28.613660   39720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:27:28.613665   39720 out.go:309] Setting ErrFile to fd 2...
	I0128 11:27:28.613671   39720 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:27:28.613792   39720 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:27:28.614326   39720 out.go:303] Setting JSON to false
	I0128 11:27:28.634044   39720 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":8823,"bootTime":1674925225,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:27:28.634127   39720 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:27:28.655639   39720 out.go:177] * [false-732000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:27:28.697566   39720 notify.go:220] Checking for updates...
	I0128 11:27:28.718409   39720 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:27:28.760507   39720 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:27:28.802533   39720 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:27:28.844411   39720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:27:28.886511   39720 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:27:28.930549   39720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:27:28.952732   39720 config.go:180] Loaded profile config "kubernetes-upgrade-325000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:28.952841   39720 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:27:29.020282   39720 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:27:29.020429   39720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:27:29.178550   39720 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:27:29.076404264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:27:29.200606   39720 out.go:177] * Using the docker driver based on user configuration
	I0128 11:27:29.222373   39720 start.go:296] selected driver: docker
	I0128 11:27:29.222406   39720 start.go:857] validating driver "docker" against <nil>
	I0128 11:27:29.222431   39720 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:27:29.226056   39720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:27:29.380263   39720 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:58 SystemTime:2023-01-28 19:27:29.278706422 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:27:29.380378   39720 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 11:27:29.380585   39720 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:27:29.402046   39720 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 11:27:29.423775   39720 cni.go:84] Creating CNI manager for "false"
	I0128 11:27:29.423787   39720 start_flags.go:319] config:
	{Name:false-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:false-732000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:29.465841   39720 out.go:177] * Starting control plane node false-732000 in cluster false-732000
	I0128 11:27:29.486677   39720 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:27:29.507954   39720 out.go:177] * Pulling base image ...
	I0128 11:27:29.550779   39720 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:29.550814   39720 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:27:29.550841   39720 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:27:29.550852   39720 cache.go:57] Caching tarball of preloaded images
	I0128 11:27:29.551004   39720 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:27:29.551016   39720 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:27:29.551707   39720 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/config.json ...
	I0128 11:27:29.551787   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/config.json: {Name:mk9067a4a8be2a03b2e0ef2fd808ff82d7b5f1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:29.613905   39720 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:27:29.613934   39720 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:27:29.613954   39720 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:27:29.613998   39720 start.go:364] acquiring machines lock for false-732000: {Name:mk612d49789ee77020f08b96172f340c42ae4c82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:27:29.614160   39720 start.go:368] acquired machines lock for "false-732000" in 149.719µs
	I0128 11:27:29.614188   39720 start.go:93] Provisioning new machine with config: &{Name:false-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:false-732000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:27:29.614280   39720 start.go:125] createHost starting for "" (driver="docker")
	I0128 11:27:28.244807   39673 machine.go:88] provisioning docker machine ...
	I0128 11:27:28.244842   39673 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-325000"
	I0128 11:27:28.244909   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:28.307366   39673 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:28.307586   39673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60810 <nil> <nil>}
	I0128 11:27:28.307598   39673 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-325000 && echo "kubernetes-upgrade-325000" | sudo tee /etc/hostname
	I0128 11:27:28.455582   39673 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-325000
	
	I0128 11:27:28.455676   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:28.552664   39673 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:28.552831   39673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60810 <nil> <nil>}
	I0128 11:27:28.552846   39673 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-325000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-325000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-325000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:27:28.687902   39673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
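Note: the guarded script above makes the node resolve its own hostname locally: if no /etc/hosts line already carries kubernetes-upgrade-325000, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. An illustrative check (not part of the harness) after it has run:

    grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 kubernetes-upgrade-325000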
	I0128 11:27:28.687927   39673 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:27:28.687952   39673 ubuntu.go:177] setting up certificates
	I0128 11:27:28.687972   39673 provision.go:83] configureAuth start
	I0128 11:27:28.688047   39673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-325000
	I0128 11:27:28.879653   39673 provision.go:138] copyHostCerts
	I0128 11:27:28.879823   39673 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:27:28.879840   39673 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:27:28.886668   39673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:27:28.909756   39673 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:27:28.909776   39673 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:27:28.910150   39673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:27:28.930955   39673 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:27:28.930969   39673 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:27:28.951828   39673 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:27:28.952361   39673 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-325000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-325000]
	I0128 11:27:29.223156   39673 provision.go:172] copyRemoteCerts
	I0128 11:27:29.223256   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:27:29.223350   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:29.286336   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:29.382610   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0128 11:27:29.400895   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:27:29.419042   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:27:29.438292   39673 provision.go:86] duration metric: configureAuth took 750.305798ms
	I0128 11:27:29.438306   39673 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:27:29.438498   39673 config.go:180] Loaded profile config "kubernetes-upgrade-325000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:29.438577   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:29.560896   39673 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:29.561062   39673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60810 <nil> <nil>}
	I0128 11:27:29.561073   39673 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:27:29.698141   39673 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:27:29.698158   39673 ubuntu.go:71] root file system type: overlay
	I0128 11:27:29.698327   39673 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:27:29.698432   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:29.763751   39673 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:29.763927   39673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60810 <nil> <nil>}
	I0128 11:27:29.763997   39673 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:27:29.912821   39673 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:27:29.912920   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:29.977396   39673 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:29.977597   39673 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 60810 <nil> <nil>}
	I0128 11:27:29.977612   39673 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:27:30.118581   39673 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:27:30.118599   39673 machine.go:91] provisioned docker machine in 1.873774145s
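Note: the diff-or-replace one-liner above is an idempotent unit update: the freshly rendered docker.service.new only replaces the live unit (followed by daemon-reload, enable and restart) when `diff -u` exits non-zero, i.e. when the content actually changed. In this run diff appears to have printed nothing, so the replace/restart branch did not fire. The pattern in isolation, with the paths as they appear in this log:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }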
	I0128 11:27:30.118609   39673 start.go:300] post-start starting for "kubernetes-upgrade-325000" (driver="docker")
	I0128 11:27:30.118616   39673 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:27:30.118699   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:27:30.118765   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:30.188624   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:30.287331   39673 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:27:30.291676   39673 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:27:30.291697   39673 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:27:30.291708   39673 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:27:30.291715   39673 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:27:30.291726   39673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:27:30.291836   39673 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:27:30.292073   39673 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:27:30.292301   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:27:30.300590   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:27:30.322629   39673 start.go:303] post-start completed in 204.005249ms
	I0128 11:27:30.322728   39673 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:27:30.322810   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:30.392894   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:30.485152   39673 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:27:30.490339   39673 fix.go:57] fixHost completed within 2.353717577s
	I0128 11:27:30.490353   39673 start.go:83] releasing machines lock for "kubernetes-upgrade-325000", held for 2.353755977s
	I0128 11:27:30.490430   39673 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-325000
	I0128 11:27:30.557364   39673 ssh_runner.go:195] Run: cat /version.json
	I0128 11:27:30.557408   39673 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:27:30.557448   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:30.557553   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:30.628244   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:30.628803   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	W0128 11:27:30.788431   39673 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:27:30.788527   39673 ssh_runner.go:195] Run: systemctl --version
	I0128 11:27:30.807911   39673 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 11:27:30.813795   39673 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 11:27:30.813940   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:27:30.822523   39673 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:27:30.838633   39673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:27:30.847448   39673 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:27:30.856336   39673 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0128 11:27:30.856355   39673 start.go:483] detecting cgroup driver to use...
	I0128 11:27:30.856368   39673 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:30.856467   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:30.871320   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:27:30.882355   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:27:30.893426   39673 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:27:30.893492   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:27:30.903860   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:30.914799   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:27:30.925388   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:30.936861   39673 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:27:30.946261   39673 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:27:30.956018   39673 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:27:30.964882   39673 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:27:30.977136   39673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:31.093492   39673 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:27:31.786287   39673 start.go:483] detecting cgroup driver to use...
	I0128 11:27:31.786315   39673 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:31.786413   39673 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:27:31.804373   39673 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:27:31.804464   39673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:27:31.818969   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:31.841534   39673 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:27:31.955740   39673 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:27:29.637797   39720 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0128 11:27:29.638012   39720 start.go:159] libmachine.API.Create for "false-732000" (driver="docker")
	I0128 11:27:29.638040   39720 client.go:168] LocalClient.Create starting
	I0128 11:27:29.638145   39720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem
	I0128 11:27:29.638185   39720 main.go:141] libmachine: Decoding PEM data...
	I0128 11:27:29.638199   39720 main.go:141] libmachine: Parsing certificate...
	I0128 11:27:29.638258   39720 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem
	I0128 11:27:29.638284   39720 main.go:141] libmachine: Decoding PEM data...
	I0128 11:27:29.638296   39720 main.go:141] libmachine: Parsing certificate...
	I0128 11:27:29.659018   39720 cli_runner.go:164] Run: docker network inspect false-732000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 11:27:29.717403   39720 cli_runner.go:211] docker network inspect false-732000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 11:27:29.717502   39720 network_create.go:281] running [docker network inspect false-732000] to gather additional debugging logs...
	I0128 11:27:29.717518   39720 cli_runner.go:164] Run: docker network inspect false-732000
	W0128 11:27:29.779947   39720 cli_runner.go:211] docker network inspect false-732000 returned with exit code 1
	I0128 11:27:29.779983   39720 network_create.go:284] error running [docker network inspect false-732000]: docker network inspect false-732000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-732000
	I0128 11:27:29.780004   39720 network_create.go:286] output of [docker network inspect false-732000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-732000
	
	** /stderr **
	I0128 11:27:29.780099   39720 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 11:27:29.843111   39720 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:27:29.843420   39720 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011df120}
	I0128 11:27:29.843433   39720 network_create.go:123] attempt to create docker network false-732000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0128 11:27:29.843495   39720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-732000 false-732000
	W0128 11:27:29.901332   39720 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-732000 false-732000 returned with exit code 1
	W0128 11:27:29.901367   39720 network_create.go:148] failed to create docker network false-732000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-732000 false-732000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0128 11:27:29.901391   39720 network_create.go:115] failed to create docker network false-732000 192.168.58.0/24, will retry: subnet is taken
	I0128 11:27:29.902718   39720 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:27:29.903084   39720 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010f6d20}
	I0128 11:27:29.903095   39720 network_create.go:123] attempt to create docker network false-732000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0128 11:27:29.903169   39720 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-732000 false-732000
	I0128 11:27:30.008174   39720 network_create.go:107] docker network false-732000 192.168.67.0/24 created
	I0128 11:27:30.008206   39720 kic.go:117] calculated static IP "192.168.67.2" for the "false-732000" container
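
The retry above is minikube picking the next free private /24 after Docker reports an address-pool overlap. A minimal Go sketch of the same idea — the candidate subnets and the overlap message are taken from this log; this is an illustration, not minikube's actual network_create.go logic:

// subnet_retry.go — try candidate private /24s until `docker network create` succeeds.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	candidates := []string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	for _, cidr := range candidates {
		gw := strings.TrimSuffix(cidr, "0/24") + "1" // x.y.z.0/24 -> gateway x.y.z.1
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+cidr, "--gateway="+gw,
			"-o", "com.docker.network.driver.mtu=1500",
			"false-732000").CombinedOutput()
		if err == nil {
			fmt.Printf("created false-732000 on %s\n", cidr)
			return
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken, exactly the "will retry: subnet is taken" case above
		}
		fmt.Printf("unrecoverable: %v: %s\n", err, out)
		return
	}
}
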
	I0128 11:27:30.008330   39720 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 11:27:30.068380   39720 cli_runner.go:164] Run: docker volume create false-732000 --label name.minikube.sigs.k8s.io=false-732000 --label created_by.minikube.sigs.k8s.io=true
	I0128 11:27:30.127175   39720 oci.go:103] Successfully created a docker volume false-732000
	I0128 11:27:30.127305   39720 cli_runner.go:164] Run: docker run --rm --name false-732000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-732000 --entrypoint /usr/bin/test -v false-732000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 11:27:30.642220   39720 oci.go:107] Successfully prepared a docker volume false-732000
	I0128 11:27:30.642264   39720 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:30.642280   39720 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 11:27:30.642412   39720 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-732000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
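
The extraction step runs tar inside the kicbase image, with the preload tarball bind-mounted read-only and the cluster's Docker volume mounted at /extractDir. A self-contained sketch of the same invocation via os/exec (image digest dropped for brevity; paths are the ones from the log):

package main

import (
	"log"
	"os/exec"
)

func main() {
	preload := "/Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4"
	// Same shape as the logged command: tar as entrypoint, lz4 decompression,
	// extract into the named volume that will become the node's /var.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", "false-732000:/extractDir",
		"gcr.io/k8s-minikube/kicbase:v0.0.37",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
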
	I0128 11:27:32.066578   39673 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:27:32.066597   39673 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:27:32.086613   39673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:32.216626   39673 ssh_runner.go:195] Run: sudo systemctl restart docker
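
The "scp memory" lines stream an in-memory daemon.json to the node over SSH rather than copying a file from disk. A rough sketch of that idea using the system ssh client and sudo tee; the JSON body, host, and port here are placeholders and not the actual 144 bytes written above:

package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	// Assumed content: a daemon.json that pins the cgroupfs driver, matching
	// the "configuring docker to use cgroupfs" step in the log.
	daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}`)
	cmd := exec.Command("ssh", "-p", "61470", "docker@127.0.0.1",
		"sudo tee /etc/docker/daemon.json >/dev/null && sudo systemctl daemon-reload && sudo systemctl restart docker")
	cmd.Stdin = bytes.NewReader(daemonJSON) // the "memory" side of scp memory
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("write failed: %v\n%s", err, out)
	}
}
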
	I0128 11:27:32.743541   39673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:32.848205   39673 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:27:33.019792   39673 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:33.237954   39673 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:33.377579   39673 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:27:33.430326   39673 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:27:33.430458   39673 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:27:33.436653   39673 start.go:551] Will wait 60s for crictl version
	I0128 11:27:33.436784   39673 ssh_runner.go:195] Run: which crictl
	I0128 11:27:33.443802   39673 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:27:33.849179   39673 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:27:33.849288   39673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:27:34.033015   39673 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:27:34.163983   39673 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:27:34.164209   39673 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-325000 dig +short host.docker.internal
	I0128 11:27:34.358513   39673 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:27:34.358651   39673 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
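
The dig call above discovers the address the host is reachable at from inside the container (Docker Desktop's host.docker.internal), which is then pinned in /etc/hosts as host.minikube.internal. The probe, as a sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the Docker-provided host name from inside the node container;
	// the answer is the address the host listens on for the container network.
	out, err := exec.Command("docker", "exec", "-t", "kubernetes-upgrade-325000",
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("host ip:", strings.TrimSpace(string(out))) // e.g. 192.168.65.2 above
}
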
	I0128 11:27:34.408356   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:34.495392   39673 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:34.495474   39673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:34.620810   39673 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:27:34.620846   39673 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:27:34.620993   39673 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:34.710349   39673 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:27:34.710383   39673 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:27:34.710485   39673 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:27:35.005826   39673 cni.go:84] Creating CNI manager for ""
	I0128 11:27:35.005845   39673 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
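
Before writing the kubelet config, minikube asks the Docker daemon which cgroup driver it runs with so the two can match (see cgroupDriver: cgroupfs in the KubeletConfiguration below). The probe is just a formatted docker info call:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker info exposes the daemon's cgroup driver through a Go template.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" here
}
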
	I0128 11:27:35.005868   39673 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:27:35.005905   39673 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-325000 NodeName:kubernetes-upgrade-325000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:27:35.006032   39673 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-325000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:27:35.006116   39673 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-325000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
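
Configs like the kubeadm YAML above are rendered from Go text/templates inside minikube. A stand-alone toy version of just the InitConfiguration stanza, for illustration only (the field values are the ones from this log):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	data := struct {
		NodeIP, CRISocket, NodeName string
		Port                        int
	}{"192.168.76.2", "/var/run/cri-dockerd.sock", "kubernetes-upgrade-325000", 8443}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
}
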
	I0128 11:27:35.006183   39673 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:27:35.020293   39673 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:27:35.020403   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:27:35.035989   39673 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (457 bytes)
	I0128 11:27:35.057678   39673 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:27:35.121350   39673 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0128 11:27:35.152192   39673 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:27:35.160425   39673 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000 for IP: 192.168.76.2
	I0128 11:27:35.160451   39673 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:35.160737   39673 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:27:35.160870   39673 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:27:35.161021   39673 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key
	I0128 11:27:35.161207   39673 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key.31bdca25
	I0128 11:27:35.161330   39673 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key
	I0128 11:27:35.161724   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:27:35.161802   39673 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:27:35.161824   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:27:35.161895   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:27:35.161992   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:27:35.162050   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:27:35.162183   39673 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:27:35.163169   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:27:35.227285   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:27:35.267663   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:27:35.340134   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:27:35.408602   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:27:35.446227   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:27:35.509540   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:27:35.543426   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:27:35.615720   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:27:35.720276   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:27:35.754744   39673 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:27:35.785175   39673 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:27:35.827742   39673 ssh_runner.go:195] Run: openssl version
	I0128 11:27:35.835902   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:27:35.849628   39673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:35.857539   39673 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:35.857606   39673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:35.869339   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:27:35.880512   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:27:35.908330   39673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:27:35.915768   39673 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:27:35.915868   39673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:27:35.925556   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
	I0128 11:27:35.936808   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:27:35.947523   39673 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:27:35.957540   39673 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:27:35.957648   39673 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:27:35.967288   39673 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
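
The ln -fs commands above install each CA under the subject-hash filename (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look certificates up in /etc/ssl/certs. A sketch of the same idiom:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCert(pemPath string) error {
	// `openssl x509 -hash` prints the subject hash OpenSSL expects as the
	// link name; suffix .0 is the first (and here only) cert with that hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link, like the `ln -fs` above
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
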
	I0128 11:27:35.979509   39673 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-325000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubernetes-upgrade-325000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:35.979683   39673 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:27:36.027502   39673 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:27:36.038521   39673 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:27:36.038540   39673 kubeadm.go:633] restartCluster start
	I0128 11:27:36.038612   39673 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:27:36.051553   39673 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:36.051659   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:36.138090   39673 kubeconfig.go:92] found "kubernetes-upgrade-325000" server: "https://127.0.0.1:60814"
	I0128 11:27:36.138778   39673 kapi.go:59] client config for kubernetes-upgrade-325000: &rest.Config{Host:"https://127.0.0.1:60814", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:27:36.139410   39673 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:27:36.156346   39673 api_server.go:165] Checking apiserver status ...
	I0128 11:27:36.156435   39673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:36.173950   39673 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12021/cgroup
	W0128 11:27:36.187733   39673 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12021/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:36.187813   39673 ssh_runner.go:195] Run: ls
	I0128 11:27:36.196095   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:36.760516   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:27:36.760604   39673 retry.go:31] will retry after 263.082536ms: https://127.0.0.1:60814/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:27:37.023928   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:37.068397   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:37.068448   39673 retry.go:31] will retry after 381.329545ms: https://127.0.0.1:60814/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:37.449918   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:37.455939   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:37.455965   39673 retry.go:31] will retry after 422.765636ms: https://127.0.0.1:60814/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:37.879832   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:37.885063   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:37.885086   39673 retry.go:31] will retry after 473.074753ms: https://127.0.0.1:60814/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:38.358247   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:38.364223   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 200:
	ok
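
The sequence above polls /healthz until the apiserver answers 200, backing off between attempts: the initial 403 comes from probing anonymously before RBAC bootstrap allows it, and the 500s are post-start hooks still settling. A minimal version of such a poller:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify mirrors an anonymous local probe through the
	// Docker-published port; production code should pin the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	url := "https://127.0.0.1:60814/healthz"
	for backoff := 250 * time.Millisecond; ; backoff += backoff / 2 {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying in %v\n", resp.StatusCode, backoff)
		} else {
			fmt.Printf("probe error: %v, retrying in %v\n", err, backoff)
		}
		time.Sleep(backoff)
	}
}
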
	I0128 11:27:38.376858   39673 system_pods.go:86] 5 kube-system pods found
	I0128 11:27:38.376882   39673 system_pods.go:89] "etcd-kubernetes-upgrade-325000" [15bb89ae-87ef-422d-8f0e-7e96690dcd7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:27:38.376890   39673 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-325000" [289d838c-f2c6-4317-b216-4cf6f601acb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:27:38.376898   39673 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-325000" [4b54c7da-d000-41ca-b973-5258894ba5d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:27:38.376904   39673 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-325000" [e7c12edf-8fdf-45a2-b985-c4f403dcf3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:27:38.376909   39673 system_pods.go:89] "storage-provisioner" [0baf3956-ec5d-4cd3-9332-0dc220beb3a7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:27:38.376915   39673 kubeadm.go:617] needs reconfigure: missing components: kube-dns, kube-proxy
	I0128 11:27:38.376923   39673 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:27:38.377003   39673 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:27:38.407230   39673 docker.go:456] Stopping containers: [55e66d04fbff b2b63d7a8766 56b00741d00d 17c2cc1d9195 71a9f6510af0 cbe2c2b6815b 4c1b10ece32c 6ae6fd339986 a2a47c942af0 ea80410b35b7 f2996e10886c d22efb00aecd 430e4b8dac1f 17e22339c6f1 5242f258ff98 eec4d64abaf9 aa6d57cc476e]
	I0128 11:27:38.407327   39673 ssh_runner.go:195] Run: docker stop 55e66d04fbff b2b63d7a8766 56b00741d00d 17c2cc1d9195 71a9f6510af0 cbe2c2b6815b 4c1b10ece32c 6ae6fd339986 a2a47c942af0 ea80410b35b7 f2996e10886c d22efb00aecd 430e4b8dac1f 17e22339c6f1 5242f258ff98 eec4d64abaf9 aa6d57cc476e
	I0128 11:27:39.216739   39673 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:27:39.307549   39673 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:27:39.322387   39673 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 28 19:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 19:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jan 28 19:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:27 /etc/kubernetes/scheduler.conf
	
	I0128 11:27:39.322497   39673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:27:39.341672   39673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:27:39.356358   39673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:27:39.368458   39673 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:39.368536   39673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:27:39.380079   39673 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:27:39.410499   39673 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:27:39.410581   39673 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:27:39.423881   39673 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:27:39.437846   39673 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
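
The grep/rm sequence above keeps a kubeconfig only if it already points at control-plane.minikube.internal:8443 and deletes stale ones, so the `kubeadm init phase kubeconfig` step that follows regenerates them. The same check as a sketch:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm rewrites it.
			fmt.Println("removing stale", conf)
			os.Remove(conf)
		}
	}
}
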
	I0128 11:27:39.437864   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:39.508493   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:40.344169   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:40.492437   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:40.561194   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:40.718153   39673 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:27:40.718233   39673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:41.229920   39673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:41.729993   39673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:41.742435   39673 api_server.go:71] duration metric: took 1.024283214s to wait for apiserver process to appear ...
	I0128 11:27:41.742458   39673 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:27:41.742472   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:38.679289   39720 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-732000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (8.036799878s)
	I0128 11:27:38.679309   39720 kic.go:199] duration metric: took 8.037011 seconds to extract preloaded images to volume
	I0128 11:27:38.679430   39720 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 11:27:38.830794   39720 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-732000 --name false-732000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-732000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-732000 --network false-732000 --ip 192.168.67.2 --volume false-732000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 11:27:39.221645   39720 cli_runner.go:164] Run: docker container inspect false-732000 --format={{.State.Running}}
	I0128 11:27:39.296324   39720 cli_runner.go:164] Run: docker container inspect false-732000 --format={{.State.Status}}
	I0128 11:27:39.379943   39720 cli_runner.go:164] Run: docker exec false-732000 stat /var/lib/dpkg/alternatives/iptables
	I0128 11:27:39.504800   39720 oci.go:144] the created container "false-732000" has a running status.
	I0128 11:27:39.504839   39720 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa...
	I0128 11:27:39.589730   39720 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 11:27:39.715043   39720 cli_runner.go:164] Run: docker container inspect false-732000 --format={{.State.Status}}
	I0128 11:27:39.784329   39720 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 11:27:39.784350   39720 kic_runner.go:114] Args: [docker exec --privileged false-732000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 11:27:39.904199   39720 cli_runner.go:164] Run: docker container inspect false-732000 --format={{.State.Status}}
	I0128 11:27:39.968846   39720 machine.go:88] provisioning docker machine ...
	I0128 11:27:39.968889   39720 ubuntu.go:169] provisioning hostname "false-732000"
	I0128 11:27:39.969008   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:40.035757   39720 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:40.035953   39720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 61470 <nil> <nil>}
	I0128 11:27:40.035966   39720 main.go:141] libmachine: About to run SSH command:
	sudo hostname false-732000 && echo "false-732000" | sudo tee /etc/hostname
	I0128 11:27:40.178688   39720 main.go:141] libmachine: SSH cmd err, output: <nil>: false-732000
	
	I0128 11:27:40.178797   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:40.241328   39720 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:40.241493   39720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 61470 <nil> <nil>}
	I0128 11:27:40.241505   39720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-732000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-732000/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-732000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:27:40.374909   39720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:27:40.374934   39720 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:27:40.374953   39720 ubuntu.go:177] setting up certificates
	I0128 11:27:40.374962   39720 provision.go:83] configureAuth start
	I0128 11:27:40.375047   39720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-732000
	I0128 11:27:40.440134   39720 provision.go:138] copyHostCerts
	I0128 11:27:40.440242   39720 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:27:40.440262   39720 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:27:40.440385   39720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:27:40.440599   39720 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:27:40.440606   39720 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:27:40.440671   39720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:27:40.440860   39720 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:27:40.440866   39720 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:27:40.440934   39720 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:27:40.441071   39720 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.false-732000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube false-732000]
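
provision.go is issuing a Docker server certificate whose SANs cover the container IP, loopback, and the minikube hostnames in the san=[...] list above. A self-signed approximation with crypto/x509 — the real code signs with the CA key pair named in the log rather than self-signing:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.false-732000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		// SANs taken from the san=[...] list in the log entry above.
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "false-732000"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
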
	I0128 11:27:40.505025   39720 provision.go:172] copyRemoteCerts
	I0128 11:27:40.505093   39720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:27:40.505152   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:40.572772   39720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61470 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa Username:docker}
	I0128 11:27:40.667093   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:27:40.684789   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0128 11:27:40.702954   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:27:40.725272   39720 provision.go:86] duration metric: configureAuth took 350.293234ms
	I0128 11:27:40.725291   39720 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:27:40.725505   39720 config.go:180] Loaded profile config "false-732000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:40.725583   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:40.792654   39720 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:40.792827   39720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 61470 <nil> <nil>}
	I0128 11:27:40.792840   39720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:27:40.927183   39720 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:27:40.927197   39720 ubuntu.go:71] root file system type: overlay
	I0128 11:27:40.927360   39720 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:27:40.927440   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:41.035404   39720 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:41.035617   39720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 61470 <nil> <nil>}
	I0128 11:27:41.035689   39720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:27:41.178843   39720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:27:41.178975   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:41.250117   39720 main.go:141] libmachine: Using SSH client type: native
	I0128 11:27:41.250295   39720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 61470 <nil> <nil>}
	I0128 11:27:41.250310   39720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:27:41.956857   39720 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:27:41.176446019 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
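
	The SSH command at 11:27:41.250310 above uses an idempotent update idiom: "diff -u old new || { mv new old; restart; }" only swaps the unit file in and restarts dockerd when the rendered unit actually differs from what is installed. Below is a minimal local sketch of the same idiom in Go; the helper name and the local (non-SSH) execution are assumptions for illustration, not minikube's actual provisioning code.

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged installs newPath over livePath and runs the restart
	// command only when the two files differ, following the
	// "diff || { mv && restart; }" idiom from the log above.
	func replaceIfChanged(livePath, newPath string, restart []string) error {
		diff := exec.Command("diff", "-u", livePath, newPath)
		var out bytes.Buffer
		diff.Stdout = &out
		if err := diff.Run(); err == nil {
			return nil // exit status 0: files identical, nothing to do
		}
		fmt.Print(out.String()) // show what changed, as the log does
		if err := os.Rename(newPath, livePath); err != nil {
			return err
		}
		cmd := exec.Command(restart[0], restart[1:]...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		err := replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			[]string{"systemctl", "restart", "docker"},
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, "update failed:", err)
		}
	}
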
	
	I0128 11:27:41.956887   39720 machine.go:91] provisioned docker machine in 1.988016439s
	I0128 11:27:41.956893   39720 client.go:171] LocalClient.Create took 12.318818657s
	I0128 11:27:41.956916   39720 start.go:167] duration metric: libmachine.API.Create for "false-732000" took 12.318871749s
	I0128 11:27:41.956923   39720 start.go:300] post-start starting for "false-732000" (driver="docker")
	I0128 11:27:41.956927   39720 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:27:41.957013   39720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:27:41.957097   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:42.020547   39720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61470 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa Username:docker}
	I0128 11:27:42.117762   39720 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:27:42.122146   39720 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:27:42.122170   39720 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:27:42.122178   39720 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:27:42.122189   39720 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:27:42.122200   39720 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:27:42.122307   39720 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:27:42.122467   39720 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:27:42.122642   39720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:27:42.131297   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:27:42.152458   39720 start.go:303] post-start completed in 195.524575ms
	I0128 11:27:42.153094   39720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-732000
	I0128 11:27:42.213962   39720 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/config.json ...
	I0128 11:27:42.214437   39720 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:27:42.214497   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:42.276645   39720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61470 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa Username:docker}
	I0128 11:27:42.368499   39720 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:27:42.374013   39720 start.go:128] duration metric: createHost completed in 12.759689174s
	I0128 11:27:42.374042   39720 start.go:83] releasing machines lock for "false-732000", held for 12.759833067s
	I0128 11:27:42.374126   39720 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-732000
	I0128 11:27:42.435747   39720 ssh_runner.go:195] Run: cat /version.json
	I0128 11:27:42.435780   39720 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:27:42.435813   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:42.435842   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:42.501309   39720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61470 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa Username:docker}
	I0128 11:27:42.501425   39720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61470 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/false-732000/id_rsa Username:docker}
	W0128 11:27:42.645121   39720 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:27:42.645193   39720 ssh_runner.go:195] Run: systemctl --version
	I0128 11:27:42.650137   39720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:27:42.664306   39720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:27:42.684829   39720 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:27:42.684938   39720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:27:42.692593   39720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:27:42.705817   39720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:27:42.719797   39720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:27:42.727815   39720 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
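
	The find/sed pipelines above rewrite any pre-existing bridge and podman CNI configs so their "subnet" (and "gateway") fields match the pod CIDR 10.244.0.0/16. The same rewrite can be done structurally instead of textually; the sketch below patches the ipam subnet of a bridge config with encoding/json (the sample config and function name are illustrative, not minikube code).

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// setSubnet rewrites the ipam "subnet" of a bridge CNI config to the pod
	// CIDR, mirroring what the sed invocations in the log do textually.
	func setSubnet(conf []byte, cidr string) ([]byte, error) {
		var m map[string]interface{}
		if err := json.Unmarshal(conf, &m); err != nil {
			return nil, err
		}
		if ipam, ok := m["ipam"].(map[string]interface{}); ok {
			ipam["subnet"] = cidr
		}
		return json.MarshalIndent(m, "", "  ")
	}

	func main() {
		// A minimal bridge config of the kind found under /etc/cni/net.d.
		in := []byte(`{"cniVersion":"0.3.1","name":"crio","type":"bridge","ipam":{"type":"host-local","subnet":"10.85.0.0/16"}}`)
		out, err := setSubnet(in, "10.244.0.0/16")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
	}
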
	I0128 11:27:42.727830   39720 start.go:483] detecting cgroup driver to use...
	I0128 11:27:42.727840   39720 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:42.727942   39720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:42.740882   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:27:42.749383   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:27:42.757750   39720 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:27:42.757801   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:27:42.766235   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:42.774670   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:27:42.783222   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:27:42.791796   39720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:27:42.799718   39720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:27:42.808129   39720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:27:42.815365   39720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:27:42.822608   39720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:42.895543   39720 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:27:42.987894   39720 start.go:483] detecting cgroup driver to use...
	I0128 11:27:42.987913   39720 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:27:42.988000   39720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:27:43.001105   39720 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:27:43.001186   39720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:27:43.014084   39720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:27:43.031390   39720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:27:43.130276   39720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:27:43.221740   39720 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:27:43.221775   39720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:27:43.240499   39720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:43.325525   39720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:27:43.570565   39720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:43.653757   39720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:27:43.736247   39720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:27:43.811537   39720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:27:43.881562   39720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:27:43.894319   39720 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:27:43.894416   39720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:27:43.899478   39720 start.go:551] Will wait 60s for crictl version
	I0128 11:27:43.899539   39720 ssh_runner.go:195] Run: which crictl
	I0128 11:27:43.903720   39720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:27:44.034346   39720 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:27:44.034432   39720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:27:44.065343   39720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:27:44.120219   39720 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:27:44.120351   39720 cli_runner.go:164] Run: docker exec -t false-732000 dig +short host.docker.internal
	I0128 11:27:44.247983   39720 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:27:44.248147   39720 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:27:44.253412   39720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
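
	The one-liner above upserts the host.minikube.internal record: grep -v drops any stale line for that name, echo appends the fresh ip<TAB>name pair, and the temp file is copied back over /etc/hosts. A standalone sketch of the same logic follows (pure string manipulation, no sudo or remote execution; the function name is illustrative).

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHostRecord drops any line already mapping name and appends
	// ip<TAB>name, following the grep -v / echo / cp idiom in the log above.
	func upsertHostRecord(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(hosts, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // stale record for this name
			}
			if line != "" {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.65.9\thost.minikube.internal\n"
		fmt.Print(upsertHostRecord(before, "192.168.65.2", "host.minikube.internal"))
	}
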
	I0128 11:27:44.263984   39720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" false-732000
	I0128 11:27:44.325488   39720 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:27:44.325565   39720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:44.351241   39720 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:27:44.351257   39720 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:27:44.351348   39720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:27:44.376318   39720 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:27:44.376337   39720 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:27:44.376421   39720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:27:44.447757   39720 cni.go:84] Creating CNI manager for "false"
	I0128 11:27:44.447780   39720 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:27:44.447796   39720 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-732000 NodeName:false-732000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:27:44.447917   39720 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "false-732000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
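
	The kubeadm config printed above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by ---), and its cgroupDriver: cgroupfs must agree with the cgroup driver detected for the runtime earlier in the log. Below is a small sketch that extracts that field from such a file, assuming gopkg.in/yaml.v3 and the document layout shown above; it is not minikube code.

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// cgroupDriverOf scans a multi-document kubeadm.yaml and returns the
	// cgroupDriver declared by its KubeletConfiguration document.
	func cgroupDriverOf(combined string) (string, bool) {
		for _, doc := range strings.Split(combined, "\n---\n") {
			var m map[string]interface{}
			if yaml.Unmarshal([]byte(doc), &m) != nil {
				continue
			}
			if m["kind"] == "KubeletConfiguration" {
				driver, ok := m["cgroupDriver"].(string)
				return driver, ok
			}
		}
		return "", false
	}

	func main() {
		sample := "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\ncgroupDriver: cgroupfs\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		if d, ok := cgroupDriverOf(sample); ok {
			fmt.Println("kubelet cgroupDriver:", d) // prints "cgroupfs"
		}
	}
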
	
	I0128 11:27:44.448000   39720 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=false-732000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:false-732000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0128 11:27:44.448065   39720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:27:44.456545   39720 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:27:44.456612   39720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:27:44.464394   39720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (444 bytes)
	I0128 11:27:44.478069   39720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:27:44.491487   39720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2088 bytes)
	I0128 11:27:44.505079   39720 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:27:44.509028   39720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:27:44.519046   39720 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000 for IP: 192.168.67.2
	I0128 11:27:44.519065   39720 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.519314   39720 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:27:44.519385   39720 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:27:44.519427   39720 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.key
	I0128 11:27:44.519442   39720 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt with IP's: []
	I0128 11:27:44.582974   39720 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt ...
	I0128 11:27:44.582985   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: {Name:mk24029c638af58b2e6aba26e3eb828ed3d05372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.583336   39720 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.key ...
	I0128 11:27:44.583345   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.key: {Name:mkde3e68137524220ee4774e361ec34691a97c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.583575   39720 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key.c7fa3a9e
	I0128 11:27:44.583592   39720 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 11:27:44.765746   39720 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt.c7fa3a9e ...
	I0128 11:27:44.765761   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt.c7fa3a9e: {Name:mkda67b374c7c71f34a5457a4fc572557aeed999 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.766049   39720 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key.c7fa3a9e ...
	I0128 11:27:44.766059   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key.c7fa3a9e: {Name:mk11c25abf60ad069ca59ffd499ba7e0017bf69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.766308   39720 certs.go:333] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt
	I0128 11:27:44.766490   39720 certs.go:337] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key
	I0128 11:27:44.766655   39720 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.key
	I0128 11:27:44.766671   39720 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.crt with IP's: []
	I0128 11:27:44.860294   39720 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.crt ...
	I0128 11:27:44.860316   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.crt: {Name:mk97f3625fe57fedd7a08d6fa15c9d956b69049e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.860603   39720 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.key ...
	I0128 11:27:44.860611   39720 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.key: {Name:mk2097e7de69c79e90500d7ededa86547bac49aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:44.861019   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:27:44.861065   39720 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:27:44.861077   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:27:44.861116   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:27:44.861153   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:27:44.861191   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:27:44.861265   39720 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:27:44.861794   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:27:44.880491   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:27:44.897734   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:27:44.915689   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:27:44.933512   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:27:44.951701   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:27:44.969550   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:27:44.987105   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:27:45.004714   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:27:45.022706   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:27:45.040746   39720 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:27:45.058405   39720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:27:45.071588   39720 ssh_runner.go:195] Run: openssl version
	I0128 11:27:45.076943   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:27:45.085295   39720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:27:45.089393   39720 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:27:45.089441   39720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:27:45.095035   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
	I0128 11:27:45.103459   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:27:45.112165   39720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:27:45.117013   39720 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:27:45.117091   39720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:27:45.123618   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:27:45.133898   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:27:45.143439   39720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:45.148228   39720 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:45.148288   39720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:27:45.154598   39720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
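
	The test -L / ln -fs steps above wire each CA certificate into OpenSSL's hashed lookup directory: the symlink name (51391683.0, 3ec20f2e.0, b5213941.0) is the subject hash reported by openssl x509 -hash. The sketch below performs that step by shelling out to openssl the same way; the paths in main are taken from the log, but the helper itself is illustrative.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates certsDir/<hash>.0 -> certPath, where <hash>
	// comes from `openssl x509 -hash -noout -in certPath`, as in the log above.
	func linkBySubjectHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		// ln -fs semantics: remove a stale link first, then create the new one.
		_ = os.Remove(link)
		return link, os.Symlink(certPath, link)
	}

	func main() {
		link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("created", link)
	}
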
	I0128 11:27:45.166036   39720 kubeadm.go:401] StartCluster: {Name:false-732000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:false-732000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:27:45.166188   39720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:27:45.195604   39720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:27:45.203549   39720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:27:45.212157   39720 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:27:45.212237   39720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:27:45.221602   39720 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:27:45.221632   39720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:27:45.276316   39720 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0128 11:27:45.276361   39720 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:27:45.389873   39720 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:27:45.389963   39720 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:27:45.390043   39720 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0128 11:27:45.524260   39720 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:27:44.120221   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:27:44.120243   39673 api_server.go:102] status: https://127.0.0.1:60814/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:27:44.622033   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:44.627996   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:27:44.628013   39673 api_server.go:102] status: https://127.0.0.1:60814/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:45.120331   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:45.125713   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:27:45.125732   39673 api_server.go:102] status: https://127.0.0.1:60814/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:27:45.622315   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:45.627716   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 200:
	ok
	I0128 11:27:45.634712   39673 api_server.go:140] control plane version: v1.26.1
	I0128 11:27:45.634729   39673 api_server.go:130] duration metric: took 3.892253133s to wait for apiserver health ...
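
	The healthz transcript above is the normal control-plane bring-up sequence: 403 while anonymous access is still denied, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) complete, then 200. Below is a minimal poller in the same spirit, assuming a self-signed local endpoint (hence the skip-verify TLS config) and the roughly 500ms cadence visible in the timestamps; the real check lives in minikube's api_server.go.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls url every 500ms until it returns 200 or the deadline
	// passes, mirroring the cadence of the log entries above.
	func waitHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The apiserver cert is signed by minikube's own CA; a real
				// client would load that CA instead of skipping verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for start := time.Now(); time.Since(start) < deadline; time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get(url)
			if err != nil {
				continue // apiserver not listening yet
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		return fmt.Errorf("healthz never returned 200 within %v", deadline)
	}

	func main() {
		if err := waitHealthz("https://127.0.0.1:60814/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
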
	I0128 11:27:45.634734   39673 cni.go:84] Creating CNI manager for ""
	I0128 11:27:45.634742   39673 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:27:45.654617   39673 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:27:45.691562   39673 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:27:45.700585   39673 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 11:27:45.714332   39673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:27:45.719912   39673 system_pods.go:59] 5 kube-system pods found
	I0128 11:27:45.719928   39673 system_pods.go:61] "etcd-kubernetes-upgrade-325000" [15bb89ae-87ef-422d-8f0e-7e96690dcd7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:27:45.719936   39673 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-325000" [289d838c-f2c6-4317-b216-4cf6f601acb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:27:45.719943   39673 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-325000" [4b54c7da-d000-41ca-b973-5258894ba5d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:27:45.719950   39673 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-325000" [e7c12edf-8fdf-45a2-b985-c4f403dcf3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:27:45.719956   39673 system_pods.go:61] "storage-provisioner" [0baf3956-ec5d-4cd3-9332-0dc220beb3a7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:27:45.719960   39673 system_pods.go:74] duration metric: took 5.61636ms to wait for pod list to return data ...
	I0128 11:27:45.719965   39673 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:27:45.723574   39673 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:27:45.723588   39673 node_conditions.go:123] node cpu capacity is 6
	I0128 11:27:45.723599   39673 node_conditions.go:105] duration metric: took 3.630076ms to run NodePressure ...
	I0128 11:27:45.723614   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:27:45.866541   39673 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 11:27:45.875535   39673 ops.go:34] apiserver oom_adj: -16
	I0128 11:27:45.875547   39673 kubeadm.go:637] restartCluster took 9.836977049s
	I0128 11:27:45.875556   39673 kubeadm.go:403] StartCluster complete in 9.896034768s
	I0128 11:27:45.875567   39673 settings.go:142] acquiring lock: {Name:mkb81e67ff3b64beaca5a3176f054172b211c785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:45.875691   39673 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:27:45.876219   39673 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:27:45.876542   39673 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 11:27:45.876533   39673 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 11:27:45.876612   39673 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-325000"
	I0128 11:27:45.876630   39673 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-325000"
	I0128 11:27:45.876638   39673 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-325000"
	W0128 11:27:45.876646   39673 addons.go:236] addon storage-provisioner should already be in state true
	I0128 11:27:45.876660   39673 config.go:180] Loaded profile config "kubernetes-upgrade-325000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:27:45.876662   39673 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-325000"
	I0128 11:27:45.876691   39673 host.go:66] Checking if "kubernetes-upgrade-325000" exists ...
	I0128 11:27:45.876979   39673 kapi.go:59] client config for kubernetes-upgrade-325000: &rest.Config{Host:"https://127.0.0.1:60814", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:27:45.877055   39673 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:27:45.877133   39673 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:27:45.884024   39673 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-325000" context rescaled to 1 replicas
	I0128 11:27:45.884057   39673 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:27:45.905208   39673 out.go:177] * Verifying Kubernetes components...
	I0128 11:27:45.978883   39673 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:27:45.985948   39673 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 11:27:45.992484   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:46.072775   39673 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 11:27:46.052256   39673 kapi.go:59] client config for kubernetes-upgrade-325000: &rest.Config{Host:"https://127.0.0.1:60814", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubernetes-upgrade-325000/client.key", CAFile:"/Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2449fa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0128 11:27:46.080705   39673 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-325000"
	W0128 11:27:46.093878   39673 addons.go:236] addon default-storageclass should already be in state true
	I0128 11:27:46.093896   39673 host.go:66] Checking if "kubernetes-upgrade-325000" exists ...
	I0128 11:27:46.093943   39673 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:27:46.093957   39673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 11:27:46.094024   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:46.095218   39673 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-325000 --format={{.State.Status}}
	I0128 11:27:46.100784   39673 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:27:46.100870   39673 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:27:46.114396   39673 api_server.go:71] duration metric: took 230.310031ms to wait for apiserver process to appear ...
	I0128 11:27:46.114423   39673 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:27:46.114436   39673 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:60814/healthz ...
	I0128 11:27:46.121260   39673 api_server.go:278] https://127.0.0.1:60814/healthz returned 200:
	ok
	I0128 11:27:46.123846   39673 api_server.go:140] control plane version: v1.26.1
	I0128 11:27:46.123857   39673 api_server.go:130] duration metric: took 9.429113ms to wait for apiserver health ...
	I0128 11:27:46.123863   39673 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:27:46.129080   39673 system_pods.go:59] 5 kube-system pods found
	I0128 11:27:46.129100   39673 system_pods.go:61] "etcd-kubernetes-upgrade-325000" [15bb89ae-87ef-422d-8f0e-7e96690dcd7a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:27:46.129113   39673 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-325000" [289d838c-f2c6-4317-b216-4cf6f601acb2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:27:46.129121   39673 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-325000" [4b54c7da-d000-41ca-b973-5258894ba5d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0128 11:27:46.129127   39673 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-325000" [e7c12edf-8fdf-45a2-b985-c4f403dcf3d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:27:46.129131   39673 system_pods.go:61] "storage-provisioner" [0baf3956-ec5d-4cd3-9332-0dc220beb3a7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0128 11:27:46.129137   39673 system_pods.go:74] duration metric: took 5.269959ms to wait for pod list to return data ...
	I0128 11:27:46.129146   39673 kubeadm.go:578] duration metric: took 245.06614ms to wait for : map[apiserver:true system_pods:true] ...
	I0128 11:27:46.129158   39673 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:27:46.132452   39673 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:27:46.132465   39673 node_conditions.go:123] node cpu capacity is 6
	I0128 11:27:46.132474   39673 node_conditions.go:105] duration metric: took 3.313069ms to run NodePressure ...
	I0128 11:27:46.132482   39673 start.go:228] waiting for startup goroutines ...
	I0128 11:27:46.167373   39673 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 11:27:46.167389   39673 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 11:27:46.167479   39673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-325000
	I0128 11:27:46.168365   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:46.241295   39673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60810 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/kubernetes-upgrade-325000/id_rsa Username:docker}
	I0128 11:27:46.274388   39673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:27:46.344548   39673 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 11:27:46.945739   39673 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0128 11:27:46.965466   39673 addons.go:492] enable addons completed in 1.088915903s: enabled=[storage-provisioner default-storageclass]
	I0128 11:27:46.965498   39673 start.go:233] waiting for cluster config update ...
	I0128 11:27:46.965509   39673 start.go:240] writing updated cluster config ...
	I0128 11:27:46.965842   39673 ssh_runner.go:195] Run: rm -f paused
	I0128 11:27:47.007943   39673 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0128 11:27:47.029466   39673 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-325000" cluster and "default" namespace by default
	I0128 11:27:45.546470   39720 out.go:204]   - Generating certificates and keys ...
	I0128 11:27:45.546564   39720 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:27:45.546631   39720 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:27:45.712729   39720 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 11:27:45.913473   39720 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 11:27:45.966251   39720 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 11:27:46.102287   39720 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 11:27:46.327349   39720 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 11:27:46.327491   39720 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [false-732000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0128 11:27:46.676055   39720 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 11:27:46.676173   39720 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [false-732000 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0128 11:27:46.742867   39720 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 11:27:47.003675   39720 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 11:27:47.113705   39720 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 11:27:47.114566   39720 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:27:47.224275   39720 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:27:47.318381   39720 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:27:47.536354   39720 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:27:47.632501   39720 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:27:47.645238   39720 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:27:47.645802   39720 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:27:47.645853   39720 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0128 11:27:47.728787   39720 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:22:51 UTC, end at Sat 2023-01-28 19:27:48 UTC. --
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.457820617Z" level=info msg="Starting up"
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.459830523Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.459871927Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.459893213Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.459902110Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.461291079Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.461336865Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.461358809Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.461373700Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.483367699Z" level=info msg="Loading containers: start."
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.628058278Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.676426285Z" level=info msg="Loading containers: done."
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.717260939Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.717353340Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:27:32 kubernetes-upgrade-325000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.749878556Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:27:32 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:32.757030013Z" level=info msg="API listen on /var/run/docker.sock"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.528307268Z" level=info msg="ignoring event" container=cbe2c2b6815bf85d2f3f1b221c72c0f3c049e4014eb920cf01d62f58d584b157 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.542902954Z" level=info msg="ignoring event" container=4c1b10ece32c2c3b1a21aa7902b798fbb6ff693cc7b8b953cb7cdba14168c0ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.543109541Z" level=info msg="ignoring event" container=71a9f6510af004deb5d835c0329aa636b180bf77a84e3c6d289ce8974e631dff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.544736318Z" level=info msg="ignoring event" container=17c2cc1d919516a7e5f481c1cac253fbef1569414eec53e270ad58b62a1d1b0a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.544764554Z" level=info msg="ignoring event" container=56b00741d00d5b2eb344c5fe4589cda9177a083be10812fab45f5e15f00b8bbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.546659519Z" level=info msg="ignoring event" container=6ae6fd33998697f245df50f7fdb628f2cb63ab2959905b0fa07f9d238cfeb170 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:38 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:38.690166663Z" level=info msg="ignoring event" container=55e66d04fbfff3b85cec6f36a36116899e76324aee52dd35d0e8a753f6447719 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 28 19:27:39 kubernetes-upgrade-325000 dockerd[11537]: time="2023-01-28T19:27:39.118258450Z" level=info msg="ignoring event" container=b2b63d7a8766a0e9a936a0d5bb1fde7a5e37bf0b62ff199e57d4d6aaac18b639 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	e8596bf21e6da       655493523f607       7 seconds ago       Running             kube-scheduler            2                   310b830260cae
	110d522e5fe98       e9c08e11b07f6       7 seconds ago       Running             kube-controller-manager   2                   50c7a233f20e8
	a1734a7501cc5       deb04688c4a35       7 seconds ago       Running             kube-apiserver            2                   d7fea14b295c0
	59d0aee3bd2d7       fce326961ae2d       7 seconds ago       Running             etcd                      2                   1345e77911028
	55e66d04fbfff       fce326961ae2d       15 seconds ago      Exited              etcd                      1                   71a9f6510af00
	b2b63d7a8766a       deb04688c4a35       15 seconds ago      Exited              kube-apiserver            1                   cbe2c2b6815bf
	56b00741d00d5       e9c08e11b07f6       15 seconds ago      Exited              kube-controller-manager   1                   4c1b10ece32c2
	17c2cc1d91951       655493523f607       15 seconds ago      Exited              kube-scheduler            1                   6ae6fd3399869
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-325000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-325000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=de6b793df87ce97412b1036a9faf51b6044637c8
	                    minikube.k8s.io/name=kubernetes-upgrade-325000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_01_28T11_27_24_0700
	                    minikube.k8s.io/version=v1.29.0-1674856271-15565
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 28 Jan 2023 19:27:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-325000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 28 Jan 2023 19:27:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 28 Jan 2023 19:27:44 +0000   Sat, 28 Jan 2023 19:27:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 28 Jan 2023 19:27:44 +0000   Sat, 28 Jan 2023 19:27:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 28 Jan 2023 19:27:44 +0000   Sat, 28 Jan 2023 19:27:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 28 Jan 2023 19:27:44 +0000   Sat, 28 Jan 2023 19:27:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-325000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  107016164Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 f1a46cb41c9d45969ef9bdf4a48d9b28
	  System UUID:                f1a46cb41c9d45969ef9bdf4a48d9b28
	  Boot ID:                    c765e7ef-84c4-4fcd-87bf-b93a3be140da
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.23
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-325000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         25s
	  kube-system                 kube-apiserver-kubernetes-upgrade-325000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-325000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-kubernetes-upgrade-325000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 25s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  25s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  25s              kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s              kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s              kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                24s              kubelet  Node kubernetes-upgrade-325000 status is now: NodeReady
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x6 over 9s)  kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 9s)  kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 9s)  kubelet  Node kubernetes-upgrade-325000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* [  +0.000210] FS-Cache: O-key=[8] 'c184920400000000'
	[  +0.000038] FS-Cache: N-cookie c=0000001c [p=00000014 fl=2 nc=0 na=1]
	[  +0.000066] FS-Cache: N-cookie d=00000000236661e7{9p.inode} n=000000000bf01250
	[  +0.000039] FS-Cache: N-key=[8] 'c184920400000000'
	[  +0.003308] FS-Cache: Duplicate cookie detected
	[  +0.000036] FS-Cache: O-cookie c=00000016 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000080] FS-Cache: O-cookie d=00000000236661e7{9p.inode} n=000000006493923b
	[  +0.000078] FS-Cache: O-key=[8] 'c184920400000000'
	[  +0.000043] FS-Cache: N-cookie c=0000001d [p=00000014 fl=2 nc=0 na=1]
	[  +0.000036] FS-Cache: N-cookie d=00000000236661e7{9p.inode} n=00000000762b654d
	[  +0.000058] FS-Cache: N-key=[8] 'c184920400000000'
	[  +3.702192] FS-Cache: Duplicate cookie detected
	[  +0.000107] FS-Cache: O-cookie c=00000017 [p=00000014 fl=226 nc=0 na=1]
	[  +0.000035] FS-Cache: O-cookie d=00000000236661e7{9p.inode} n=00000000a6130ac5
	[  +0.000046] FS-Cache: O-key=[8] 'c084920400000000'
	[  +0.000051] FS-Cache: N-cookie c=00000020 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000053] FS-Cache: N-cookie d=00000000236661e7{9p.inode} n=000000006d36f49f
	[  +0.000036] FS-Cache: N-key=[8] 'c084920400000000'
	[  +0.498122] FS-Cache: Duplicate cookie detected
	[  +0.000072] FS-Cache: O-cookie c=0000001a [p=00000014 fl=226 nc=0 na=1]
	[  +0.000091] FS-Cache: O-cookie d=00000000236661e7{9p.inode} n=00000000352e521a
	[  +0.000076] FS-Cache: O-key=[8] 'c884920400000000'
	[  +0.000094] FS-Cache: N-cookie c=00000021 [p=00000014 fl=2 nc=0 na=1]
	[  +0.000096] FS-Cache: N-cookie d=00000000236661e7{9p.inode} n=00000000be2f0210
	[  +0.000101] FS-Cache: N-key=[8] 'c884920400000000'
	
	* 
	* ==> etcd [55e66d04fbff] <==
	* {"level":"info","ts":"2023-01-28T19:27:33.757Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:27:33.757Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T19:27:33.757Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:34.743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:34.746Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-325000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T19:27:34.746Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:27:34.746Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:27:34.746Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T19:27:34.746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-01-28T19:27:34.747Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T19:27:34.747Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T19:27:37.300Z","caller":"traceutil/trace.go:171","msg":"trace[2026165012] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"220.098825ms","start":"2023-01-28T19:27:37.080Z","end":"2023-01-28T19:27:37.300Z","steps":["trace[2026165012] 'process raft request'  (duration: 220.008249ms)"],"step_count":1}
	{"level":"info","ts":"2023-01-28T19:27:38.460Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-01-28T19:27:38.460Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-325000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	WARNING: 2023/01/28 19:27:38 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-01-28T19:27:38.523Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-01-28T19:27:38.659Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:27:38.659Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:27:38.659Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-325000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [59d0aee3bd2d] <==
	* {"level":"info","ts":"2023-01-28T19:27:41.448Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-01-28T19:27:41.449Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-01-28T19:27:41.449Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-01-28T19:27:41.449Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-01-28T19:27:41.449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T19:27:41.449Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-01-28T19:27:41.458Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-01-28T19:27:41.458Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-01-28T19:27:41.458Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-01-28T19:27:41.458Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:27:41.458Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-01-28T19:27:42.929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-01-28T19:27:42.930Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-325000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-01-28T19:27:42.930Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:27:42.930Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-01-28T19:27:42.932Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-01-28T19:27:42.932Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-01-28T19:27:42.932Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-01-28T19:27:42.932Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  19:27:49 up  2:27,  0 users,  load average: 4.19, 2.03, 1.53
	Linux kubernetes-upgrade-325000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [a1734a7501cc] <==
	* I0128 19:27:44.111099       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0128 19:27:44.111136       1 shared_informer.go:273] Waiting for caches to sync for cluster_authentication_trust_controller
	I0128 19:27:44.111158       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0128 19:27:44.111193       1 shared_informer.go:273] Waiting for caches to sync for crd-autoregister
	I0128 19:27:44.111303       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:27:44.111860       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0128 19:27:44.111999       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 19:27:44.109482       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0128 19:27:44.205905       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0128 19:27:44.209193       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0128 19:27:44.211105       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0128 19:27:44.211200       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0128 19:27:44.211808       1 cache.go:39] Caches are synced for autoregister controller
	I0128 19:27:44.209441       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0128 19:27:44.212542       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0128 19:27:44.212571       1 shared_informer.go:280] Caches are synced for configmaps
	I0128 19:27:44.212580       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0128 19:27:44.243534       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0128 19:27:44.906790       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0128 19:27:45.114555       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0128 19:27:45.801240       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0128 19:27:45.811164       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0128 19:27:45.835520       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0128 19:27:45.852099       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0128 19:27:45.858234       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [b2b63d7a8766] <==
	*   "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0128 19:27:38.474828       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0128 19:27:38.475506       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0128 19:27:38.475517       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	W0128 19:27:38.505425       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I0128 19:27:38.507296       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	
	* 
	* ==> kube-controller-manager [110d522e5fe9] <==
	* I0128 19:27:46.307304       1 controllermanager.go:622] Started "ttl"
	I0128 19:27:46.307377       1 ttl_controller.go:120] Starting TTL controller
	I0128 19:27:46.307384       1 shared_informer.go:273] Waiting for caches to sync for TTL
	I0128 19:27:46.353264       1 controllermanager.go:622] Started "bootstrapsigner"
	I0128 19:27:46.353378       1 shared_informer.go:273] Waiting for caches to sync for bootstrap_signer
	I0128 19:27:46.408372       1 controllermanager.go:622] Started "tokencleaner"
	I0128 19:27:46.408465       1 tokencleaner.go:111] Starting token cleaner controller
	I0128 19:27:46.408472       1 shared_informer.go:273] Waiting for caches to sync for token_cleaner
	I0128 19:27:46.408479       1 shared_informer.go:280] Caches are synced for token_cleaner
	I0128 19:27:46.554826       1 controllermanager.go:622] Started "attachdetach"
	I0128 19:27:46.554915       1 attach_detach_controller.go:328] Starting attach detach controller
	I0128 19:27:46.554922       1 shared_informer.go:273] Waiting for caches to sync for attach detach
	I0128 19:27:46.602681       1 controllermanager.go:622] Started "serviceaccount"
	I0128 19:27:46.602749       1 serviceaccounts_controller.go:111] Starting service account controller
	I0128 19:27:46.602755       1 shared_informer.go:273] Waiting for caches to sync for service account
	I0128 19:27:46.656768       1 controllermanager.go:622] Started "daemonset"
	I0128 19:27:46.656872       1 daemon_controller.go:265] Starting daemon sets controller
	I0128 19:27:46.656878       1 shared_informer.go:273] Waiting for caches to sync for daemon sets
	I0128 19:27:46.702627       1 controllermanager.go:622] Started "replicaset"
	I0128 19:27:46.702727       1 replica_set.go:201] Starting replicaset controller
	I0128 19:27:46.702733       1 shared_informer.go:273] Waiting for caches to sync for ReplicaSet
	I0128 19:27:46.953615       1 controllermanager.go:622] Started "garbagecollector"
	I0128 19:27:46.953911       1 garbagecollector.go:154] Starting garbage collector controller
	I0128 19:27:46.954122       1 shared_informer.go:273] Waiting for caches to sync for garbage collector
	I0128 19:27:46.954160       1 graph_builder.go:291] GraphBuilder running
	
	* 
	* ==> kube-controller-manager [56b00741d00d] <==
	* I0128 19:27:34.152075       1 serving.go:348] Generated self-signed cert in-memory
	I0128 19:27:35.431004       1 controllermanager.go:182] Version: v1.26.1
	I0128 19:27:35.431064       1 controllermanager.go:184] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:27:35.432446       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0128 19:27:35.433025       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0128 19:27:35.433044       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0128 19:27:35.432577       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-scheduler [17c2cc1d9195] <==
	* I0128 19:27:34.312568       1 serving.go:348] Generated self-signed cert in-memory
	W0128 19:27:36.808869       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 19:27:36.808916       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 19:27:36.808925       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 19:27:36.808931       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 19:27:36.820156       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 19:27:36.820206       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:27:36.825587       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 19:27:36.825714       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 19:27:36.826007       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:27:36.826213       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:27:36.926290       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:27:38.520558       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0128 19:27:38.520673       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0128 19:27:38.520859       1 scheduling_queue.go:1065] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0128 19:27:38.520911       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [e8596bf21e6d] <==
	* I0128 19:27:42.136798       1 serving.go:348] Generated self-signed cert in-memory
	W0128 19:27:44.119708       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0128 19:27:44.119756       1 authentication.go:349] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0128 19:27:44.119781       1 authentication.go:350] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0128 19:27:44.119786       1 authentication.go:351] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0128 19:27:44.145097       1 server.go:152] "Starting Kubernetes Scheduler" version="v1.26.1"
	I0128 19:27:44.145152       1 server.go:154] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0128 19:27:44.206119       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0128 19:27:44.206356       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0128 19:27:44.206433       1 shared_informer.go:273] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0128 19:27:44.206485       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0128 19:27:44.306768       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:22:51 UTC, end at Sat 2023-01-28 19:27:50 UTC. --
	Jan 28 19:27:40 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:40.940927   12936 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8211b1eecaa638b845a9fb3f3151cee2-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-325000\" (UID: \"8211b1eecaa638b845a9fb3f3151cee2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-325000"
	Jan 28 19:27:40 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:40.940940   12936 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8211b1eecaa638b845a9fb3f3151cee2-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-325000\" (UID: \"8211b1eecaa638b845a9fb3f3151cee2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-325000"
	Jan 28 19:27:40 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:40.940959   12936 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8211b1eecaa638b845a9fb3f3151cee2-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-325000\" (UID: \"8211b1eecaa638b845a9fb3f3151cee2\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-325000"
	Jan 28 19:27:40 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:40.959619   12936 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-325000"
	Jan 28 19:27:40 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:40.960016   12936 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-325000"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:41.164269   12936 scope.go:115] "RemoveContainer" containerID="55e66d04fbfff3b85cec6f36a36116899e76324aee52dd35d0e8a753f6447719"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:41.171763   12936 scope.go:115] "RemoveContainer" containerID="b2b63d7a8766a0e9a936a0d5bb1fde7a5e37bf0b62ff199e57d4d6aaac18b639"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:41.179305   12936 scope.go:115] "RemoveContainer" containerID="56b00741d00d5b2eb344c5fe4589cda9177a083be10812fab45f5e15f00b8bbf"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:41.187749   12936 scope.go:115] "RemoveContainer" containerID="17c2cc1d919516a7e5f481c1cac253fbef1569414eec53e270ad58b62a1d1b0a"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:41.240188   12936 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-325000?timeout=10s": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:41.420764   12936 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-325000"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:41.421133   12936 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-325000"
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: W0128 19:27:41.431988   12936 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-325000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:41.432037   12936 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-325000&limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: W0128 19:27:41.518116   12936 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:27:41 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:41.518209   12936 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.76.2:8443: connect: connection refused
	Jan 28 19:27:42 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:42.229972   12936 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-325000"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:44.232588   12936 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-325000"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:44.232741   12936 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-325000"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:44.251225   12936 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-325000\" not found"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:44.351893   12936 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-325000\" not found"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: E0128 19:27:44.452755   12936 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"kubernetes-upgrade-325000\" not found"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:44.629099   12936 apiserver.go:52] "Watching apiserver"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:44.639475   12936 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Jan 28 19:27:44 kubernetes-upgrade-325000 kubelet[12936]: I0128 19:27:44.709021   12936 reconciler.go:41] "Reconciler: start to sync state"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-325000 -n kubernetes-upgrade-325000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-325000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-325000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-325000 describe pod storage-provisioner: exit status 1 (51.504905ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-325000 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "kubernetes-upgrade-325000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-325000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-325000: (2.631051875s)
--- FAIL: TestKubernetesUpgrade (556.70s)
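
For reference, the post-mortem pod filter used by helpers_test.go above can be re-run by hand against any live profile. A minimal sketch, assuming a profile whose kubeconfig context still exists (the kubernetes-upgrade-325000 context was removed by the cleanup step above):

	kubectl --context kubernetes-upgrade-325000 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

An empty result means every pod reached the Running phase; here it returned storage-provisioner, which the follow-up describe call could no longer find (exit status 1, NotFound).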

TestMissingContainerUpgrade (54.81s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker : exit status 78 (40.115286759s)

-- stdout --
	* [missing-upgrade-329000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-329000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-329000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:18:01.890425482 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-329000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:18:21.251426599 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
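
The provisioning command quoted above leans on a small shell idiom: `diff -u` exits non-zero when the unit on disk differs from the freshly generated one, so the braced group installs the new file and restarts docker only when there is actually a change to apply. A minimal sketch of the pattern (same paths as the log; the doubled "sudo sudo" visible in the original command is left verbatim there but dropped here):

	# Install-if-changed: the right-hand side of || runs only when diff
	# reports a difference (or the new file is missing).
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f restart docker
	}

In this run it is the restart step inside the group that fails, which is why the non-zero exit surfaces as a provisioning error rather than a diff mismatch.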
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker : exit status 70 (3.93083537s)

                                                
                                                
-- stdout --
	* [missing-upgrade-329000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-329000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-329000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker 
E0128 11:18:33.600363   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.605583   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.615669   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.636069   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.677209   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.757558   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:33.918366   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:18:34.239046   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.1.3274260474.exe start -p missing-upgrade-329000 --memory=2200 --driver=docker : exit status 70 (4.01654189s)

                                                
                                                
-- stdout --
	* [missing-upgrade-329000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-329000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-329000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-01-28 11:18:34.679265 -0800 PST m=+2371.454825164
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-329000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-329000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee",
	        "Created": "2023-01-28T19:18:10.078380643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 566649,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:18:10.300862103Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee/hostname",
	        "HostsPath": "/var/lib/docker/containers/31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee/hosts",
	        "LogPath": "/var/lib/docker/containers/31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee/31bac7644df6f977220fb5882688f52a00b210a365deabe9c33b24d0d9d77cee-json.log",
	        "Name": "/missing-upgrade-329000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-329000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9e1a7ae3c47ed6315b39e4c366ac8f8637cffdb0c0cc67c892ffc437768b0772-init/diff:/var/lib/docker/overlay2/f2082e8368827d702c9b897534123c77316a5f99a01a2ecc698ec89dd0e8a00b/diff:/var/lib/docker/overlay2/b7552f8ec85a58c0dc8c1055a356360ec507e18d5ac5f3773d8dcee24b70d60e/diff:/var/lib/docker/overlay2/1b71cb2eff0873f607d971cb941b8afea6e7c40a7bf5386b8d9f3404d37fb3de/diff:/var/lib/docker/overlay2/2e2f1db693cfd333d4daeb80baf4fab0f859df66206a50a784991ae746eb6b08/diff:/var/lib/docker/overlay2/df93a6dbaf0bd330b14cb706b27b98cc8c024b2cfef7dd65f9e863eb228d93c1/diff:/var/lib/docker/overlay2/e1b6999e13f526f1513a4193298162abf99a50546b397c39f376bdcba622b3e1/diff:/var/lib/docker/overlay2/f195710d7c50118df874fdf885422c431610fc6ac2010c2200ef4345c5b2d64a/diff:/var/lib/docker/overlay2/f1fc58d52bb2de6bce96d05a499221a90e72e1384317eb636dcf83396b33e7d7/diff:/var/lib/docker/overlay2/f26fa1480745883a190e1d42242bbbee96e02877913dcf41a61f54876c93cddc/diff:/var/lib/docker/overlay2/563dee
7dac001ba952f4d08587d2bfc26a88659a7277fd827fc88bc5ed3b0617/diff:/var/lib/docker/overlay2/c398ee3d451c35b0eff9bad390e6feb8327dccb33d756c0ec1aaeaf0b07561a1/diff:/var/lib/docker/overlay2/e141d730e31ee69ec1df6689fc546a4ec3853de9484de15045fc23b5a7406bc3/diff:/var/lib/docker/overlay2/ae02f9ebec64d826db3d0d14682f361dfcd86128a1846fd66ec3d014f6a890d8/diff:/var/lib/docker/overlay2/53fc81dcf65012d4c4b871f170af11946003ab3ba8946424b34edc11d3321e05/diff:/var/lib/docker/overlay2/fd0193053b8accc539c62635da0553c6caa5fd9bfe54f15ce464bd10b55508b5/diff:/var/lib/docker/overlay2/cfa8e4768a11a2570a454569de54d90d499ae40feae3858b13fb29bd8cf7ced5/diff:/var/lib/docker/overlay2/44054d6264e6bade67eb78076bcec6ecea32beb741019a1fa190b347f85b3af0/diff:/var/lib/docker/overlay2/4400651b5a8456da2e096cecb017decc6d525ef3b3f1f1ae54ad9f4956ec6168/diff:/var/lib/docker/overlay2/d3d1e0c5641b1dcc7da1481378d754114ac6a5aac7febf4a1c63d4045ce8fe09/diff:/var/lib/docker/overlay2/264806b7a4946f208a9da0e95425d8bf83cc7b27de055edf40f51307b2fe2972/diff:/var/lib/d
ocker/overlay2/4a48420b5f84f99deb556dd0c6c30624ea192d1cf9a1586f2fc8ad69fb653c8c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9e1a7ae3c47ed6315b39e4c366ac8f8637cffdb0c0cc67c892ffc437768b0772/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9e1a7ae3c47ed6315b39e4c366ac8f8637cffdb0c0cc67c892ffc437768b0772/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9e1a7ae3c47ed6315b39e4c366ac8f8637cffdb0c0cc67c892ffc437768b0772/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-329000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-329000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-329000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-329000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-329000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "063f3e018fe64dac6ec23edebf315f1cf26b2fc4ae621cdc39c1e7c26d92f952",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60472"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60473"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60474"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/063f3e018fe6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5b1ffdf6a5664d3645612163ff78ca7d779b286f1f0bc1b2626d20e8d18394f9",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "d8afff2198de0b56ef3b30d2c6866a99d956efc295bd21af171990714694cadb",
	                    "EndpointID": "5b1ffdf6a5664d3645612163ff78ca7d779b286f1f0bc1b2626d20e8d18394f9",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
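
The dump above is the full `docker inspect` JSON the post-mortem helper records; when only the runtime state matters, an inspect format template pulls the same fields directly. A small sketch against the container from this report (assumes it still exists):

	docker inspect -f 'status={{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}' missing-upgrade-329000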
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-329000 -n missing-upgrade-329000
E0128 11:18:34.879520   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-329000 -n missing-upgrade-329000: exit status 6 (388.250354ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:18:35.116629   36773 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-329000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-329000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
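
The stale-context warning quoted above names its own remedy; on a live profile the fix would look like the sketch below (moot in this run, since the profile is deleted in the very next step):

	minikube update-context -p missing-upgrade-329000
	kubectl config current-context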
helpers_test.go:175: Cleaning up "missing-upgrade-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-329000
E0128 11:18:36.161371   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-329000: (2.324458901s)
--- FAIL: TestMissingContainerUpgrade (54.81s)
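
The comment block in the generated unit diff above describes the systemd rule it is working around: `ExecStart=` values accumulate across the base unit and any drop-ins, and a service that is not `Type=oneshot` may declare only one, so an empty `ExecStart=` has to come first to reset the inherited list. A hedged sketch of the same pattern written as a conventional drop-in (paths and the dockerd command line are illustrative, not the provisioner's actual output):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/10-override.conf >/dev/null <<-'EOF'
	[Service]
	# Reset the inherited ExecStart= list; without this line systemd refuses
	# the unit with "more than one ExecStart= setting".
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker

(The <<- heredoc form strips the leading tabs, so the block can be pasted as shown.)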

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.66s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0-1674856271-15565 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1874667429/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
! Unable to update hyperkit driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.29.0-1674856271-15565/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.29.0-1674856271-15565/docker-machine-driver-hyperkit.sha256 Dst:/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1874667429/001/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x4726bc0 0x4726bc0 0x4726bc0 0x4726bc0 0x4726bc0 0x4726bc0 0x4726bc0] Decompressors:map[bz2:0x4726bc0 gz:0x4726bc0 tar:0x4726bc0 tar.bz2:0x4726bc0 tar.gz:0x4726bc0 tar.xz:0x4726bc0 tar.zst:0x4726bc0 tbz2:0x4726bc0 tgz:0x4726bc0 txz:0x4726bc0 tzst:0x4726bc0 xz:0x4726bc0 zip:0x4726bc0 zst:0x4726bc0] Getters:map[file:0xc000f77ca0 http:0xc0007a7090 https:0xc0007a70e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
driver_install_or_update_test.go:218: invalid driver version. expected: testing, got: v1.2.0
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (8.66s)
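
The getter error above failed on the checksum fetch rather than on the driver binary itself: the download URL carries a `?checksum=file:<url>.sha256` suffix, and go-getter fetches that checksum file before validating anything, so a missing .sha256 release asset aborts the whole update with the 404 seen here. A quick hedged probe of the asset (URL copied from the log):

	# A 404 here reproduces the failure above; -L follows the GitHub
	# release-asset redirect so the final status code is what gets reported.
	curl -sIL -o /dev/null -w '%{http_code}\n' "https://github.com/kubernetes/minikube/releases/download/v1.29.0-1674856271-15565/docker-machine-driver-hyperkit.sha256"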

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (52.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker 
E0128 11:19:55.529647   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:20:06.666912   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker : exit status 70 (41.547643711s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-318000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1834555634
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:20:02.666574526 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-318000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:20:22.549575673 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-318000", then "minikube start -p stopped-upgrade-318000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress snapshots elided)
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:20:22.549575673 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker : exit status 70 (4.33010013s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-318000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig3968026307
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-318000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube-v1.9.0.3040731007.exe start -p stopped-upgrade-318000 --memory=2200 --vm-driver=docker : exit status 70 (4.239510535s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-318000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/legacy_kubeconfig1065902759
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-318000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (52.85s)
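
Every attempt in this test bottoms out in "Job for docker.service failed", and the quoted advice to check `systemctl status` and `journalctl -xe` applies inside the kic node container, not on the macOS host. A hedged sketch of pulling that journal while the container is still running (profile name from this report; `docker exec` is the fallback when `minikube ssh` is unavailable mid-provisioning):

	docker exec stopped-upgrade-318000 systemctl status docker.service --no-pager
	docker exec stopped-upgrade-318000 journalctl -u docker.service --no-pager -n 50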

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (251.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0128 11:32:50.651877   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:33:04.430766   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:33:18.502214   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.507449   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.517559   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.537718   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.579514   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.660785   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:18.820976   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:19.141102   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:19.781836   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:21.062052   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:23.622532   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:28.742905   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:33:33.537091   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m11.430668049s)

                                                
                                                
-- stdout --
	* [old-k8s-version-182000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-182000 in cluster old-k8s-version-182000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 11:32:46.832068   43291 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:32:46.832417   43291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:32:46.832422   43291 out.go:309] Setting ErrFile to fd 2...
	I0128 11:32:46.832425   43291 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:32:46.832609   43291 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:32:46.833181   43291 out.go:303] Setting JSON to false
	I0128 11:32:46.854151   43291 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9141,"bootTime":1674925225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:32:46.854241   43291 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:32:46.877611   43291 out.go:177] * [old-k8s-version-182000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:32:46.920403   43291 notify.go:220] Checking for updates...
	I0128 11:32:46.941150   43291 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:32:46.962152   43291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:32:46.983336   43291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:32:47.004333   43291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:32:47.025248   43291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:32:47.046228   43291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:32:47.067721   43291 config.go:180] Loaded profile config "calico-732000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:32:47.067774   43291 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:32:47.132873   43291 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:32:47.133028   43291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:32:47.285168   43291 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:32:47.18826115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:32:47.327781   43291 out.go:177] * Using the docker driver based on user configuration
	I0128 11:32:47.348807   43291 start.go:296] selected driver: docker
	I0128 11:32:47.348822   43291 start.go:857] validating driver "docker" against <nil>
	I0128 11:32:47.348835   43291 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:32:47.351427   43291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:32:47.517687   43291 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:32:47.407204294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:32:47.517878   43291 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 11:32:47.518079   43291 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:32:47.556860   43291 out.go:177] * Using Docker Desktop driver with root privileges
	I0128 11:32:47.578204   43291 cni.go:84] Creating CNI manager for ""
	I0128 11:32:47.578236   43291 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:32:47.578252   43291 start_flags.go:319] config:
	{Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:32:47.637908   43291 out.go:177] * Starting control plane node old-k8s-version-182000 in cluster old-k8s-version-182000
	I0128 11:32:47.659011   43291 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:32:47.680055   43291 out.go:177] * Pulling base image ...
	I0128 11:32:47.738388   43291 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:32:47.738466   43291 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:32:47.738484   43291 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 11:32:47.738502   43291 cache.go:57] Caching tarball of preloaded images
	I0128 11:32:47.738736   43291 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:32:47.738761   43291 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 11:32:47.739893   43291 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/config.json ...
	I0128 11:32:47.740061   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/config.json: {Name:mk82641dee1bfbc07d915c26c36d8701394f90b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:32:47.799738   43291 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:32:47.799756   43291 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:32:47.799775   43291 cache.go:193] Successfully downloaded all kic artifacts
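
The two cache checks above resolve the kicbase reference against the local daemon by digest before deciding to skip the pull. The same lookup can be reproduced by hand (a sketch; uses the digest from the log):

    # Prints the local image ID if the digest is present; errors otherwise.
    docker image inspect --format '{{.Id}}' \
      gcr.io/k8s-minikube/kicbase@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
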
	I0128 11:32:47.799828   43291 start.go:364] acquiring machines lock for old-k8s-version-182000: {Name:mk4015ba4a18ecf0d87a4f26a0f8283e87452f7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:32:47.800005   43291 start.go:368] acquired machines lock for "old-k8s-version-182000" in 163.428µs
	I0128 11:32:47.800033   43291 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:32:47.800128   43291 start.go:125] createHost starting for "" (driver="docker")
	I0128 11:32:47.839878   43291 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0128 11:32:47.840324   43291 start.go:159] libmachine.API.Create for "old-k8s-version-182000" (driver="docker")
	I0128 11:32:47.840386   43291 client.go:168] LocalClient.Create starting
	I0128 11:32:47.840591   43291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem
	I0128 11:32:47.840686   43291 main.go:141] libmachine: Decoding PEM data...
	I0128 11:32:47.840731   43291 main.go:141] libmachine: Parsing certificate...
	I0128 11:32:47.840856   43291 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem
	I0128 11:32:47.840942   43291 main.go:141] libmachine: Decoding PEM data...
	I0128 11:32:47.840966   43291 main.go:141] libmachine: Parsing certificate...
	I0128 11:32:47.841795   43291 cli_runner.go:164] Run: docker network inspect old-k8s-version-182000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0128 11:32:47.896874   43291 cli_runner.go:211] docker network inspect old-k8s-version-182000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0128 11:32:47.897002   43291 network_create.go:281] running [docker network inspect old-k8s-version-182000] to gather additional debugging logs...
	I0128 11:32:47.897021   43291 cli_runner.go:164] Run: docker network inspect old-k8s-version-182000
	W0128 11:32:47.951619   43291 cli_runner.go:211] docker network inspect old-k8s-version-182000 returned with exit code 1
	I0128 11:32:47.951648   43291 network_create.go:284] error running [docker network inspect old-k8s-version-182000]: docker network inspect old-k8s-version-182000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-182000
	I0128 11:32:47.951660   43291 network_create.go:286] output of [docker network inspect old-k8s-version-182000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-182000
	
	** /stderr **
	I0128 11:32:47.951748   43291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0128 11:32:48.007355   43291 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:32:48.008924   43291 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:32:48.010368   43291 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0128 11:32:48.010672   43291 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001292540}
	I0128 11:32:48.010682   43291 network_create.go:123] attempt to create docker network old-k8s-version-182000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0128 11:32:48.010768   43291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-182000 old-k8s-version-182000
	I0128 11:32:48.097334   43291 network_create.go:107] docker network old-k8s-version-182000 192.168.76.0/24 created
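
The subnet walk above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridges already claim them, then settles on 192.168.76.0/24. A by-hand equivalent of the scan plus the create call, mirroring the flags in the log (a sketch; assumes the docker CLI is on PATH):

    # Show which subnets current networks already occupy.
    for n in $(docker network ls --format '{{.Name}}'); do
      docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
    done
    # Create the cluster network on the first free /24.
    docker network create --driver=bridge \
      --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
      -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=old-k8s-version-182000 \
      old-k8s-version-182000
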
	I0128 11:32:48.097366   43291 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-182000" container
	I0128 11:32:48.097470   43291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0128 11:32:48.152343   43291 cli_runner.go:164] Run: docker volume create old-k8s-version-182000 --label name.minikube.sigs.k8s.io=old-k8s-version-182000 --label created_by.minikube.sigs.k8s.io=true
	I0128 11:32:48.206146   43291 oci.go:103] Successfully created a docker volume old-k8s-version-182000
	I0128 11:32:48.206268   43291 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-182000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-182000 --entrypoint /usr/bin/test -v old-k8s-version-182000:/var gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -d /var/lib
	I0128 11:32:48.697252   43291 oci.go:107] Successfully prepared a docker volume old-k8s-version-182000
	I0128 11:32:48.697309   43291 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:32:48.697325   43291 kic.go:190] Starting extracting preloaded images to volume ...
	I0128 11:32:48.697442   43291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-182000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir
	I0128 11:32:55.118671   43291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-182000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 -I lz4 -xf /preloaded.tar -C /extractDir: (6.421155772s)
	I0128 11:32:55.118691   43291 kic.go:199] duration metric: took 6.421351 seconds to extract preloaded images to volume
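
Whether the preload actually landed in the volume can be spot-checked by mounting it read-only in a throwaway container (a sketch; same image digest as above, /bin/ls path assumed from the Ubuntu-based kicbase):

    docker run --rm --entrypoint /bin/ls \
      -v old-k8s-version-182000:/var:ro \
      gcr.io/k8s-minikube/kicbase@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 \
      /var/lib/docker
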
	I0128 11:32:55.118796   43291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0128 11:32:55.259761   43291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-182000 --name old-k8s-version-182000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-182000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-182000 --network old-k8s-version-182000 --ip 192.168.76.2 --volume old-k8s-version-182000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15
	I0128 11:32:55.607022   43291 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Running}}
	I0128 11:32:55.671096   43291 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Status}}
	I0128 11:32:55.733399   43291 cli_runner.go:164] Run: docker exec old-k8s-version-182000 stat /var/lib/dpkg/alternatives/iptables
	I0128 11:32:55.863842   43291 oci.go:144] the created container "old-k8s-version-182000" has a running status.
	I0128 11:32:55.863871   43291 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa...
	I0128 11:32:55.935387   43291 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0128 11:32:56.078803   43291 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Status}}
	I0128 11:32:56.140489   43291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0128 11:32:56.140508   43291 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-182000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0128 11:32:56.244631   43291 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Status}}
	I0128 11:32:56.303074   43291 machine.go:88] provisioning docker machine ...
	I0128 11:32:56.303116   43291 ubuntu.go:169] provisioning hostname "old-k8s-version-182000"
	I0128 11:32:56.303210   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
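
Each --publish in the run command binds 127.0.0.1 with an empty host port, so Docker assigns a free ephemeral one; the inspect template above recovers the port mapped to sshd, and docker port gives the same answer (sketch):

    docker port old-k8s-version-182000 22/tcp   # e.g. 127.0.0.1:62637
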
	I0128 11:32:56.359811   43291 main.go:141] libmachine: Using SSH client type: native
	I0128 11:32:56.360024   43291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62637 <nil> <nil>}
	I0128 11:32:56.360040   43291 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-182000 && echo "old-k8s-version-182000" | sudo tee /etc/hostname
	I0128 11:32:56.502228   43291 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-182000
	
	I0128 11:32:56.502317   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:56.559072   43291 main.go:141] libmachine: Using SSH client type: native
	I0128 11:32:56.559243   43291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62637 <nil> <nil>}
	I0128 11:32:56.559262   43291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-182000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-182000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-182000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:32:56.693276   43291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:32:56.693296   43291 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:32:56.693315   43291 ubuntu.go:177] setting up certificates
	I0128 11:32:56.693322   43291 provision.go:83] configureAuth start
	I0128 11:32:56.693394   43291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:32:56.749139   43291 provision.go:138] copyHostCerts
	I0128 11:32:56.749229   43291 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:32:56.749237   43291 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:32:56.749359   43291 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:32:56.749575   43291 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:32:56.749581   43291 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:32:56.749646   43291 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:32:56.749797   43291 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:32:56.749804   43291 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:32:56.749878   43291 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:32:56.750002   43291 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-182000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-182000]
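
A roughly equivalent server certificate, carrying the SAN list from the log, can be produced with openssl (a sketch with illustrative file names, not minikube's actual code path; assumes a CA pair ca.pem/ca-key.pem already exists, as it does in this run):

    # Key and CSR; the org matches the one minikube logs above.
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem \
      -subj "/O=jenkins.old-k8s-version-182000" -out server.csr
    # Sign with the CA and attach the SANs.
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 1095 -out server.pem \
      -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:old-k8s-version-182000')
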
	I0128 11:32:56.933730   43291 provision.go:172] copyRemoteCerts
	I0128 11:32:56.933797   43291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:32:56.933853   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:56.990930   43291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62637 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:32:57.085401   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:32:57.103181   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0128 11:32:57.121037   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:32:57.138574   43291 provision.go:86] duration metric: configureAuth took 445.235355ms
	I0128 11:32:57.138590   43291 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:32:57.138746   43291 config.go:180] Loaded profile config "old-k8s-version-182000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:32:57.138812   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:57.195810   43291 main.go:141] libmachine: Using SSH client type: native
	I0128 11:32:57.195970   43291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62637 <nil> <nil>}
	I0128 11:32:57.195987   43291 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:32:57.326259   43291 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:32:57.326271   43291 ubuntu.go:71] root file system type: overlay
	I0128 11:32:57.326447   43291 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:32:57.326529   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:57.383509   43291 main.go:141] libmachine: Using SSH client type: native
	I0128 11:32:57.383674   43291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62637 <nil> <nil>}
	I0128 11:32:57.383726   43291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:32:57.526437   43291 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:32:57.526541   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:57.584704   43291 main.go:141] libmachine: Using SSH client type: native
	I0128 11:32:57.584863   43291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62637 <nil> <nil>}
	I0128 11:32:57.584877   43291 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:32:58.200397   43291 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-01-19 17:34:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-01-28 19:32:57.523301231 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0128 11:32:58.200419   43291 machine.go:91] provisioned docker machine in 1.897320579s
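
The unit swap a few lines up is guarded by diff, so the daemon is only re-enabled and restarted when the rendered file differs from the installed one; the same compare-then-swap step unrolled (a sketch; paths and flags as in the log):

    if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi
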
	I0128 11:32:58.200425   43291 client.go:171] LocalClient.Create took 10.360006757s
	I0128 11:32:58.200442   43291 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-182000" took 10.360093658s
	I0128 11:32:58.200447   43291 start.go:300] post-start starting for "old-k8s-version-182000" (driver="docker")
	I0128 11:32:58.200451   43291 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:32:58.200521   43291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:32:58.200572   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:58.259969   43291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62637 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:32:58.354161   43291 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:32:58.357686   43291 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:32:58.357705   43291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:32:58.357712   43291 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:32:58.357717   43291 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:32:58.357728   43291 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:32:58.357833   43291 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:32:58.358003   43291 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:32:58.358200   43291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:32:58.365617   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:32:58.383885   43291 start.go:303] post-start completed in 183.415148ms
	I0128 11:32:58.384492   43291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:32:58.442928   43291 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/config.json ...
	I0128 11:32:58.443369   43291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:32:58.443434   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:58.500202   43291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62637 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:32:58.590140   43291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:32:58.594848   43291 start.go:128] duration metric: createHost completed in 10.794686188s
	I0128 11:32:58.594863   43291 start.go:83] releasing machines lock for "old-k8s-version-182000", held for 10.794822338s
	I0128 11:32:58.594939   43291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:32:58.653699   43291 ssh_runner.go:195] Run: cat /version.json
	I0128 11:32:58.653712   43291 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 11:32:58.653811   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:58.653813   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:32:58.715039   43291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62637 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:32:58.715102   43291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62637 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	W0128 11:32:58.998023   43291 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:32:58.998107   43291 ssh_runner.go:195] Run: systemctl --version
	I0128 11:32:59.003181   43291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:32:59.008135   43291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:32:59.029291   43291 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:32:59.029370   43291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:32:59.043500   43291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:32:59.051229   43291 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
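
The find/sed pass rewrites any bridge CNI config in place; the patched file named in the log can be read back from the node to confirm the 10.244.0.0/16 subnet took effect (sketch):

    docker exec old-k8s-version-182000 cat /etc/cni/net.d/100-crio-bridge.conf
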
	I0128 11:32:59.051246   43291 start.go:483] detecting cgroup driver to use...
	I0128 11:32:59.051259   43291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:32:59.051345   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:32:59.065596   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 11:32:59.074399   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:32:59.083062   43291 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:32:59.083118   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:32:59.091821   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:32:59.100458   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:32:59.109087   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:32:59.117788   43291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:32:59.125849   43291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:32:59.134471   43291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:32:59.141740   43291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:32:59.148825   43291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:32:59.219564   43291 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:32:59.299399   43291 start.go:483] detecting cgroup driver to use...
	I0128 11:32:59.299417   43291 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:32:59.299479   43291 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:32:59.310369   43291 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:32:59.310444   43291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:32:59.320739   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:32:59.335209   43291 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:32:59.429460   43291 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:32:59.511497   43291 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:32:59.511515   43291 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:32:59.525439   43291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:32:59.614586   43291 ssh_runner.go:195] Run: sudo systemctl restart docker
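
The 144 bytes copied to /etc/docker/daemon.json select the cgroupfs driver; a representative rendering followed by the same reload/restart sequence (a sketch only; the exact JSON minikube writes may differ):

    # Illustrative content; minikube's actual daemon.json may carry more keys.
    printf '%s\n' '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl daemon-reload
    sudo systemctl restart docker
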
	I0128 11:32:59.822178   43291 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:32:59.853113   43291 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:32:59.934526   43291 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 11:32:59.934642   43291 cli_runner.go:164] Run: docker exec -t old-k8s-version-182000 dig +short host.docker.internal
	I0128 11:33:00.045011   43291 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:33:00.045118   43291 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:33:00.049464   43291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
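
The hosts entry is refreshed idempotently: strip any stale line for the name, append the current mapping, and copy the temp file back with sudo; the same shape as the logged one-liner, unrolled (sketch):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.65.2\thost.minikube.internal'
    } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts
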
	I0128 11:33:00.059625   43291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:33:00.116731   43291 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:33:00.116806   43291 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:33:00.141860   43291 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:33:00.141879   43291 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:33:00.141970   43291 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:33:00.166304   43291 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:33:00.166322   43291 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:33:00.166422   43291 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:33:00.233940   43291 cni.go:84] Creating CNI manager for ""
	I0128 11:33:00.233956   43291 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:33:00.233969   43291 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:33:00.233988   43291 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-182000 NodeName:old-k8s-version-182000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:33:00.234110   43291 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-182000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-182000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:33:00.234197   43291 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-182000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:33:00.234264   43291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 11:33:00.242247   43291 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:33:00.242311   43291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:33:00.250018   43291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0128 11:33:00.263226   43291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:33:00.276352   43291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
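
After the transfer the rendered config sits at /var/tmp/minikube/kubeadm.yaml.new on the node; it can be inspected from the host while the profile is up (a sketch, using the report's binary):

    out/minikube-darwin-amd64 ssh -p old-k8s-version-182000 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new
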
	I0128 11:33:00.289339   43291 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:33:00.293263   43291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:33:00.303384   43291 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000 for IP: 192.168.76.2
	I0128 11:33:00.303406   43291 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.303590   43291 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:33:00.303665   43291 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:33:00.303703   43291 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.key
	I0128 11:33:00.303721   43291 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.crt with IP's: []
	I0128 11:33:00.439823   43291 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.crt ...
	I0128 11:33:00.439841   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.crt: {Name:mk22df9d001a1acb80475565f9b9c54e4882863b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.440218   43291 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.key ...
	I0128 11:33:00.440229   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.key: {Name:mk682d6e9148089516677da58edd3b0d3d114683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.440422   43291 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key.31bdca25
	I0128 11:33:00.440437   43291 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0128 11:33:00.559952   43291 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt.31bdca25 ...
	I0128 11:33:00.559987   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt.31bdca25: {Name:mka2529a6e235bc40f364eca6def7e7ddefdbc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.560293   43291 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key.31bdca25 ...
	I0128 11:33:00.560301   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key.31bdca25: {Name:mk8ad98ff6c57e105e17bc95626c1af2dc13c403 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.560506   43291 certs.go:333] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt
	I0128 11:33:00.560674   43291 certs.go:337] copying /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key
	I0128 11:33:00.560827   43291 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key
	I0128 11:33:00.560841   43291 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.crt with IP's: []
	I0128 11:33:00.854114   43291 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.crt ...
	I0128 11:33:00.854130   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.crt: {Name:mk3940704c34895967bfeffaebabc0d303fc50bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.854437   43291 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key ...
	I0128 11:33:00.854448   43291 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key: {Name:mkfe4b87d8b6212353370fc9631900db1f6b6a9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:33:00.854869   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:33:00.854920   43291 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:33:00.854931   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:33:00.854968   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:33:00.855005   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:33:00.855042   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:33:00.855115   43291 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:33:00.855653   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:33:00.875585   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:33:00.894656   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:33:00.914535   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:33:00.936949   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:33:00.954921   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:33:00.973069   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:33:00.990970   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:33:01.008848   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:33:01.026545   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:33:01.044290   43291 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:33:01.062084   43291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:33:01.075699   43291 ssh_runner.go:195] Run: openssl version
	I0128 11:33:01.081127   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:33:01.089563   43291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:33:01.093863   43291 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:33:01.093919   43291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:33:01.099626   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:33:01.108038   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:33:01.116239   43291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:33:01.120414   43291 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:33:01.120465   43291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:33:01.125943   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
	I0128 11:33:01.134234   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:33:01.142606   43291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:33:01.146705   43291 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:33:01.146753   43291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:33:01.152290   43291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
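The hex-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's hashed-directory convention: tools that verify against /etc/ssl/certs look a CA up by its subject hash, so each PEM gets a <subject-hash>.0 symlink. A sketch of producing one such link by hand, using the same CA path the log already uses:

	# sketch: compute the OpenSSL subject hash and create the <hash>.0 link
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"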
	I0128 11:33:01.160791   43291 kubeadm.go:401] StartCluster: {Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:33:01.160894   43291 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:33:01.184773   43291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:33:01.193053   43291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:33:01.200595   43291 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:33:01.200650   43291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:33:01.208131   43291 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
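The missing kubeconfigs are expected on first boot, so minikube skips the stale-config cleanup and moves straight to the kubeadm init on the next line. Its long --ignore-preflight-errors list suppresses checks known to misfire inside the Docker-driver container (occupied directories, port 10250, swap, SystemVerification, the bridge-nf sysctl). As a hedged sketch, the same checks can be exercised in isolation with the preflight phase:

	# sketch: run only kubeadm's preflight checks against the generated config
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml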
	I0128 11:33:01.208160   43291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:33:01.255566   43291 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:33:01.255634   43291 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:33:01.565943   43291 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:33:01.566022   43291 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:33:01.566116   43291 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:33:01.796409   43291 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:33:01.797248   43291 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:33:01.803880   43291 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:33:01.865680   43291 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:33:01.886752   43291 out.go:204]   - Generating certificates and keys ...
	I0128 11:33:01.886831   43291 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:33:01.886901   43291 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:33:01.963790   43291 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0128 11:33:02.120509   43291 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0128 11:33:02.224603   43291 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0128 11:33:02.319860   43291 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0128 11:33:02.513294   43291 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0128 11:33:02.513444   43291 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-182000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:33:02.643261   43291 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0128 11:33:02.643381   43291 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-182000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0128 11:33:02.833867   43291 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0128 11:33:02.881480   43291 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0128 11:33:03.066740   43291 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0128 11:33:03.066811   43291 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:33:03.350006   43291 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:33:03.554020   43291 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:33:03.879140   43291 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:33:03.941799   43291 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:33:03.947197   43291 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:33:03.971541   43291 out.go:204]   - Booting up control plane ...
	I0128 11:33:03.971770   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:33:03.971921   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:33:03.972070   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:33:03.972196   43291 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:33:03.972492   43291 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:33:43.954088   43291 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:33:43.954767   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:33:43.954943   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:33:48.955651   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:33:48.955889   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:33:58.957601   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:33:58.957811   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:34:18.958126   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:34:18.958318   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:34:58.959048   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:34:58.959233   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:34:58.959245   43291 kubeadm.go:322] 
	I0128 11:34:58.959272   43291 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:34:58.959304   43291 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:34:58.959308   43291 kubeadm.go:322] 
	I0128 11:34:58.959333   43291 kubeadm.go:322] This error is likely caused by:
	I0128 11:34:58.959361   43291 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:34:58.959441   43291 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:34:58.959448   43291 kubeadm.go:322] 
	I0128 11:34:58.959527   43291 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:34:58.959561   43291 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:34:58.959600   43291 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:34:58.959613   43291 kubeadm.go:322] 
	I0128 11:34:58.959710   43291 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:34:58.959788   43291 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:34:58.959862   43291 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:34:58.959897   43291 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:34:58.959949   43291 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:34:58.959971   43291 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:34:58.962738   43291 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:34:58.962798   43291 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:34:58.962904   43291 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:34:58.962986   43291 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:34:58.963049   43291 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:34:58.963118   43291 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0128 11:34:58.963283   43291 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-182000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-182000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0128 11:34:58.963325   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:34:59.384104   43291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:34:59.394344   43291 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:34:59.394398   43291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:34:59.402131   43291 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:34:59.402154   43291 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:34:59.462667   43291 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:34:59.462707   43291 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:34:59.838201   43291 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:34:59.838302   43291 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:34:59.838404   43291 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:35:00.092722   43291 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:35:00.094472   43291 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:35:00.101304   43291 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:35:00.184377   43291 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:35:00.219469   43291 out.go:204]   - Generating certificates and keys ...
	I0128 11:35:00.219630   43291 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:35:00.219732   43291 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:35:00.219861   43291 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:35:00.219941   43291 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:35:00.220058   43291 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:35:00.220122   43291 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:35:00.220225   43291 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:35:00.220293   43291 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:35:00.220393   43291 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:35:00.220486   43291 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:35:00.220552   43291 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:35:00.220638   43291 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:35:00.352833   43291 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:35:00.413681   43291 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:35:00.508008   43291 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:35:00.550030   43291 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:35:00.551206   43291 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:35:00.573037   43291 out.go:204]   - Booting up control plane ...
	I0128 11:35:00.573180   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:35:00.573279   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:35:00.573369   43291 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:35:00.573462   43291 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:35:00.573643   43291 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:35:40.560875   43291 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:35:40.562075   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:35:40.562336   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:35:45.563908   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:35:45.564137   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:35:55.564621   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:35:55.564797   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:36:15.565375   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:36:15.565569   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:36:55.567407   43291 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:36:55.567612   43291 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:36:55.567623   43291 kubeadm.go:322] 
	I0128 11:36:55.567656   43291 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:36:55.567693   43291 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:36:55.567702   43291 kubeadm.go:322] 
	I0128 11:36:55.567740   43291 kubeadm.go:322] This error is likely caused by:
	I0128 11:36:55.567769   43291 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:36:55.567935   43291 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:36:55.567949   43291 kubeadm.go:322] 
	I0128 11:36:55.568090   43291 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:36:55.568138   43291 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:36:55.568197   43291 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:36:55.568207   43291 kubeadm.go:322] 
	I0128 11:36:55.568344   43291 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:36:55.568481   43291 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:36:55.568639   43291 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:36:55.568712   43291 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:36:55.568799   43291 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:36:55.568840   43291 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:36:55.571229   43291 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:36:55.571293   43291 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:36:55.571400   43291 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:36:55.571473   43291 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:36:55.571550   43291 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:36:55.571602   43291 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:36:55.571621   43291 kubeadm.go:403] StartCluster complete in 3m54.410251403s
	I0128 11:36:55.571726   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:36:55.595118   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.595130   43291 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:36:55.595197   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:36:55.618420   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.618434   43291 logs.go:281] No container was found matching "etcd"
	I0128 11:36:55.618506   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:36:55.643289   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.643302   43291 logs.go:281] No container was found matching "coredns"
	I0128 11:36:55.643369   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:36:55.666197   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.666210   43291 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:36:55.666277   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:36:55.689976   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.689990   43291 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:36:55.690059   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:36:55.712763   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.712776   43291 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:36:55.712849   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:36:55.736238   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.736252   43291 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:36:55.736322   43291 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:36:55.760663   43291 logs.go:279] 0 containers: []
	W0128 11:36:55.760676   43291 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:36:55.760683   43291 logs.go:124] Gathering logs for kubelet ...
	I0128 11:36:55.760690   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:36:55.798991   43291 logs.go:124] Gathering logs for dmesg ...
	I0128 11:36:55.799003   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:36:55.811451   43291 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:36:55.811466   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:36:55.902272   43291 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:36:55.902284   43291 logs.go:124] Gathering logs for Docker ...
	I0128 11:36:55.902290   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:36:55.919801   43291 logs.go:124] Gathering logs for container status ...
	I0128 11:36:55.919817   43291 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:36:57.970987   43291 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051152228s)
	W0128 11:36:57.971121   43291 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
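Both init attempts die the same way: the kubelet never answers its health endpoint on port 10248, kubeadm times out in wait-control-plane, and the container sweeps above find no kube-apiserver, etcd, or controller-manager containers at all. The probe kubeadm keeps repeating, plus the kubelet journal minikube itself gathers, can be run by hand against the node (a sketch, assuming the container is still up):

	# sketch: re-run the kubelet healthz probe and pull its recent journal
	minikube -p old-k8s-version-182000 ssh "curl -sSL http://localhost:10248/healthz"
	minikube -p old-k8s-version-182000 ssh "sudo journalctl -u kubelet -n 100 --no-pager"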
	W0128 11:36:57.971137   43291 out.go:239] * 
	W0128 11:36:57.971313   43291 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:36:57.971333   43291 out.go:239] * 
	W0128 11:36:57.971970   43291 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0128 11:36:58.035583   43291 out.go:177] 
	W0128 11:36:58.078681   43291 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:36:58.078856   43291 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:36:58.078935   43291 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:36:58.136307   43291 out.go:177] 
** /stderr **
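The kubeadm output above spells out the kubelet triage path; collected in one place, and assuming the commands are executed inside the minikube node (for example via `minikube ssh -p old-k8s-version-182000`), a minimal sketch of that sequence is:

	systemctl status kubelet                      # is the service running, and with what exit status?
	journalctl -xeu kubelet                       # why it exited, per the log's own suggestion
	docker ps -a | grep kube | grep -v pause      # kubernetes containers the runtime started
	docker logs CONTAINERID                       # CONTAINERID is a placeholder for an ID from the previous command
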
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:
-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:32:55.598829839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fefdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/docker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5fc008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e557a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de26b70de179023ab92799de40ef1eeb652a8d446f418d9a014b2020f75ff7b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62639"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62635"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62636"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de26b70de179",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "f469f3373bcad5db987b8fe4d4eca778cf1ffe79ecee6ea42cb7f1d2530a653e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
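The full `docker inspect` dump is useful for the archive, but when triaging interactively only a few fields matter. A hedged sketch using docker's standard Go-template output (a docker CLI feature, not something the test harness itself does) to pull just the container state and its IP on the cluster network:

	docker inspect -f '{{.State.Status}}' old-k8s-version-182000
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-182000
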
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 6 (406.406422ms)
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`
-- /stdout --
** stderr ** 
	E0128 11:36:58.688607   44386 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-182000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-182000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (251.93s)
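The suggestion logged above is to pass the kubelet's cgroup driver explicitly, since kubeadm detected "cgroupfs" while "systemd" is recommended. A sketch of the retry, assuming the broken profile is deleted first and reusing the flags from the failing invocation:

	out/minikube-darwin-amd64 delete -p old-k8s-version-182000
	out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
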
TestStartStop/group/old-k8s-version/serial/DeployApp (0.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-182000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-182000 create -f testdata/busybox.yaml: exit status 1 (34.960772ms)
** stderr ** 
	error: context "old-k8s-version-182000" does not exist
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-182000 create -f testdata/busybox.yaml failed: exit status 1
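The create fails before ever reaching a cluster: per the status error above, the "old-k8s-version-182000" entry was never written to the kubeconfig, so kubectl has no such context. A quick way to confirm what kubectl actually sees, assuming the same kubeconfig the harness uses:

	KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig kubectl config get-contexts
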
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:
-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:32:55.598829839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fefdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/docker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5fc008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e557a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de26b70de179023ab92799de40ef1eeb652a8d446f418d9a014b2020f75ff7b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62639"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62635"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62636"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de26b70de179",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "f469f3373bcad5db987b8fe4d4eca778cf1ffe79ecee6ea42cb7f1d2530a653e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 6 (407.649862ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:36:59.190771   44401 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-182000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-182000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
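
Note: the status output above carries its own remediation hint. A minimal sketch of that fix, assuming the profile name from this report and a cluster that is actually reachable:

    # Rewrite the kubeconfig entry for this profile so kubectl stops pointing at a stale endpoint.
    out/minikube-darwin-amd64 update-context -p old-k8s-version-182000
    # Confirm the context now resolves.
    kubectl config get-contexts old-k8s-version-182000

In this run the failure is one step earlier: the profile is missing from the kubeconfig entirely, which is why the helper records exit status 6 as "may be ok" and moves on to the post-mortem below.
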
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:32:55.598829839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de26b70de179023ab92799de40ef1eeb652a8d446f418d9a014b2020f75ff7b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62639"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62635"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62636"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de26b70de179",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "f469f3373bcad5db987b8fe4d4eca778cf1ffe79ecee6ea42cb7f1d2530a653e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
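
Note: the helpers only care about a few fields of this dump (the run state and the published ports). As a hedged alternative to scanning the full JSON, `docker inspect` accepts a Go template; this filtering idiom comes from Docker's own documentation:

    # Just the container state.
    docker inspect -f '{{.State.Status}}' old-k8s-version-182000
    # Just the loopback host port mapped to SSH (22/tcp).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-182000
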
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 6 (420.477128ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:36:59.669004   44413 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-182000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-182000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.98s)
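
Note: the repeated `exit status 6` in the checks above is meaningful. `minikube status` documents that the exit code encodes the component checks bitwise from right to left: 1 when the host is not running, 2 when the cluster (apiserver) check fails, 4 when the Kubernetes/kubeconfig check fails. 6 = 2 + 4 therefore matches the output exactly: the host reports Running while the apiserver and kubeconfig checks both fail. A quick sketch to reproduce:

    # Re-run the helper's check and inspect the encoded exit code.
    out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
    echo $?   # 6: bits 2 (cluster) and 4 (kubeconfig) set, bit 1 (host) clear
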

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-182000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0128 11:37:00.293898   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:37:03.469156   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:37:07.315517   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:37:09.277517   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.283355   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.293648   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.313939   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.354393   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.434665   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.595911   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:09.916083   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:10.191958   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:37:10.556572   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:11.837533   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:14.398403   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:19.520199   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:20.774268   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:37:29.760795   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:37:45.448620   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:37:50.241512   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:38:01.736345   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:38:16.808806   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:38:18.501209   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-182000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.200304478s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-182000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-182000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-182000 describe deploy/metrics-server -n kube-system: exit status 1 (35.331285ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-182000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-182000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
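
Note: every kubectl apply in the callback above died with `connection refused` against 127.0.0.1:8443, i.e. the apiserver inside the node was not listening when the addon manifests were applied; the addon flags themselves (`--images`/`--registries`, which substitute fake.domain/k8s.gcr.io/echoserver:1.4 for the metrics-server image) were accepted. A hedged way to confirm the apiserver state from the host, assuming the profile is still up and curl is present in the node image:

    # Probe the apiserver health endpoint from inside the node.
    out/minikube-darwin-amd64 ssh -p old-k8s-version-182000 -- curl -sk https://localhost:8443/healthz
    # "ok" means it is serving; "connection refused" reproduces the failure above.
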
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664489,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:32:55.598829839Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de26b70de179023ab92799de40ef1eeb652a8d446f418d9a014b2020f75ff7b5",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62637"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62638"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62639"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62635"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62636"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de26b70de179",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "f469f3373bcad5db987b8fe4d4eca778cf1ffe79ecee6ea42cb7f1d2530a653e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 6 (409.280935ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:38:29.375668   44513 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-182000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-182000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.71s)
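
Note: the cert_rotation errors interleaved through this test reference client certificates of other, already-deleted test profiles (bridge-732000, kubenet-732000, calico-732000, ...); they are emitted by the long-lived test process (pid 25982 on every line), not by the minikube command under test. A hedged way to see which profiles still exist versus which certificate paths remain on disk:

    # Profiles minikube still knows about ...
    out/minikube-darwin-amd64 profile list
    # ... versus the client-cert directories left under the test home.
    ls /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles
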

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (496.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0128 11:38:33.538382   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:38:37.851536   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:37.857079   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:37.869229   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:37.889556   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:37.930291   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:38.012521   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:38.173597   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:38.495329   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:39.135475   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:40.416130   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:42.976784   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:46.189089   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:38:46.619032   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:38:48.098842   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:38:58.339848   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:39:14.305083   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:39:18.821226   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:39:23.658004   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:39:53.122966   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:39:59.781566   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:40:01.605065   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:40:06.810191   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:40:29.289651   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:40:32.965659   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m11.823334585s)

                                                
                                                
-- stdout --
	* [old-k8s-version-182000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-182000 in cluster old-k8s-version-182000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-182000" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0128 11:38:31.422456   44543 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:38:31.422694   44543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:38:31.422700   44543 out.go:309] Setting ErrFile to fd 2...
	I0128 11:38:31.422703   44543 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:38:31.422830   44543 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:38:31.423329   44543 out.go:303] Setting JSON to false
	I0128 11:38:31.441651   44543 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9486,"bootTime":1674925225,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:38:31.441741   44543 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:38:31.463830   44543 out.go:177] * [old-k8s-version-182000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:38:31.505709   44543 notify.go:220] Checking for updates...
	I0128 11:38:31.505719   44543 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:38:31.527775   44543 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:38:31.549359   44543 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:38:31.570676   44543 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:38:31.591835   44543 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:38:31.613731   44543 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:38:31.635630   44543 config.go:180] Loaded profile config "old-k8s-version-182000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:38:31.657408   44543 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0128 11:38:31.678365   44543 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:38:31.739480   44543 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:38:31.739611   44543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:38:31.880776   44543 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:38:31.789795206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:38:31.923533   44543 out.go:177] * Using the docker driver based on existing profile
	I0128 11:38:31.944620   44543 start.go:296] selected driver: docker
	I0128 11:38:31.944644   44543 start.go:857] validating driver "docker" against &{Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:38:31.944759   44543 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:38:31.948633   44543 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:38:32.091149   44543 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:38:31.999699591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:38:32.091334   44543 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:38:32.091352   44543 cni.go:84] Creating CNI manager for ""
	I0128 11:38:32.091364   44543 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:38:32.091376   44543 start_flags.go:319] config:
	{Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:38:32.113265   44543 out.go:177] * Starting control plane node old-k8s-version-182000 in cluster old-k8s-version-182000
	I0128 11:38:32.135904   44543 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:38:32.156892   44543 out.go:177] * Pulling base image ...
	I0128 11:38:32.177878   44543 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:38:32.177912   44543 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:38:32.177970   44543 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 11:38:32.177986   44543 cache.go:57] Caching tarball of preloaded images
	I0128 11:38:32.178215   44543 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:38:32.178234   44543 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 11:38:32.179251   44543 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/config.json ...
	I0128 11:38:32.236323   44543 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:38:32.236341   44543 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:38:32.236359   44543 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:38:32.236411   44543 start.go:364] acquiring machines lock for old-k8s-version-182000: {Name:mk4015ba4a18ecf0d87a4f26a0f8283e87452f7f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:38:32.236525   44543 start.go:368] acquired machines lock for "old-k8s-version-182000" in 90.169µs
	I0128 11:38:32.236555   44543 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:38:32.236565   44543 fix.go:55] fixHost starting: 
	I0128 11:38:32.236806   44543 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Status}}
	I0128 11:38:32.293608   44543 fix.go:103] recreateIfNeeded on old-k8s-version-182000: state=Stopped err=<nil>
	W0128 11:38:32.293636   44543 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:38:32.315559   44543 out.go:177] * Restarting existing docker container for "old-k8s-version-182000" ...
	I0128 11:38:32.337233   44543 cli_runner.go:164] Run: docker start old-k8s-version-182000
	I0128 11:38:32.669539   44543 cli_runner.go:164] Run: docker container inspect old-k8s-version-182000 --format={{.State.Status}}
	I0128 11:38:32.729922   44543 kic.go:426] container "old-k8s-version-182000" state is running.
	I0128 11:38:32.730517   44543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:38:32.793947   44543 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/config.json ...
	I0128 11:38:32.794423   44543 machine.go:88] provisioning docker machine ...
	I0128 11:38:32.794453   44543 ubuntu.go:169] provisioning hostname "old-k8s-version-182000"
	I0128 11:38:32.794574   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:32.861497   44543 main.go:141] libmachine: Using SSH client type: native
	I0128 11:38:32.861719   44543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62979 <nil> <nil>}
	I0128 11:38:32.861733   44543 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-182000 && echo "old-k8s-version-182000" | sudo tee /etc/hostname
	I0128 11:38:33.012790   44543 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-182000
	
	I0128 11:38:33.012905   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:33.073579   44543 main.go:141] libmachine: Using SSH client type: native
	I0128 11:38:33.073738   44543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62979 <nil> <nil>}
	I0128 11:38:33.073750   44543 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-182000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-182000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-182000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:38:33.210384   44543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:38:33.210404   44543 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:38:33.210425   44543 ubuntu.go:177] setting up certificates
	I0128 11:38:33.210439   44543 provision.go:83] configureAuth start
	I0128 11:38:33.210540   44543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:38:33.267114   44543 provision.go:138] copyHostCerts
	I0128 11:38:33.267211   44543 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:38:33.267220   44543 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:38:33.267345   44543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:38:33.267543   44543 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:38:33.267549   44543 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:38:33.267614   44543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:38:33.267756   44543 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:38:33.267761   44543 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:38:33.267825   44543 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:38:33.267935   44543 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-182000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-182000]
	I0128 11:38:33.346041   44543 provision.go:172] copyRemoteCerts
	I0128 11:38:33.346101   44543 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:38:33.346153   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:33.404178   44543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62979 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:38:33.498551   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:38:33.516255   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0128 11:38:33.533692   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:38:33.551093   44543 provision.go:86] duration metric: configureAuth took 340.640293ms
	I0128 11:38:33.551106   44543 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:38:33.551253   44543 config.go:180] Loaded profile config "old-k8s-version-182000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0128 11:38:33.551313   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:33.610369   44543 main.go:141] libmachine: Using SSH client type: native
	I0128 11:38:33.610524   44543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62979 <nil> <nil>}
	I0128 11:38:33.610534   44543 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:38:33.742998   44543 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:38:33.743023   44543 ubuntu.go:71] root file system type: overlay
	I0128 11:38:33.743229   44543 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:38:33.743321   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:33.801686   44543 main.go:141] libmachine: Using SSH client type: native
	I0128 11:38:33.801858   44543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62979 <nil> <nil>}
	I0128 11:38:33.801905   44543 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:38:33.943012   44543 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:38:33.943120   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:34.000245   44543 main.go:141] libmachine: Using SSH client type: native
	I0128 11:38:34.000411   44543 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 62979 <nil> <nil>}
	I0128 11:38:34.000428   44543 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:38:34.136460   44543 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:38:34.136474   44543 machine.go:91] provisioned docker machine in 1.342039866s
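The diff-guarded one-liner above keeps the unit update idempotent: `diff -u` exits non-zero only when the staged docker.service.new differs from the live unit, and only in that case is the new file swapped in and Docker reloaded and restarted. A minimal sketch of the same pattern (the helper name is illustrative, not minikube's own code):

	# swap NEW into place and restart docker only when the files differ;
	# diff exits non-zero on difference, so the || branch runs only then
	install_if_changed() {
		sudo diff -u "$2" "$1" || {
			sudo mv "$1" "$2"
			sudo systemctl -f daemon-reload && sudo systemctl -f restart docker
		}
	}
	install_if_changed /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service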
	I0128 11:38:34.136481   44543 start.go:300] post-start starting for "old-k8s-version-182000" (driver="docker")
	I0128 11:38:34.136489   44543 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:38:34.136563   44543 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:38:34.136613   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:34.194251   44543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62979 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:38:34.288522   44543 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:38:34.292234   44543 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:38:34.292253   44543 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:38:34.292261   44543 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:38:34.292265   44543 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:38:34.292273   44543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:38:34.292369   44543 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:38:34.292543   44543 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:38:34.292748   44543 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:38:34.300166   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:38:34.317410   44543 start.go:303] post-start completed in 180.920409ms
	I0128 11:38:34.317483   44543 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:38:34.317542   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:34.375042   44543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62979 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:38:34.465475   44543 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:38:34.470054   44543 fix.go:57] fixHost completed within 2.233477636s
	I0128 11:38:34.470070   44543 start.go:83] releasing machines lock for "old-k8s-version-182000", held for 2.233526266s
	I0128 11:38:34.470171   44543 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-182000
	I0128 11:38:34.528092   44543 ssh_runner.go:195] Run: cat /version.json
	I0128 11:38:34.528092   44543 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0128 11:38:34.528177   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:34.528208   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:34.587873   44543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62979 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	I0128 11:38:34.588010   44543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62979 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/old-k8s-version-182000/id_rsa Username:docker}
	W0128 11:38:34.871596   44543 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:38:34.871674   44543 ssh_runner.go:195] Run: systemctl --version
	I0128 11:38:34.876860   44543 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0128 11:38:34.881447   44543 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0128 11:38:34.881503   44543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0128 11:38:34.889285   44543 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0128 11:38:34.896872   44543 cni.go:304] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0128 11:38:34.896899   44543 start.go:483] detecting cgroup driver to use...
	I0128 11:38:34.896911   44543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:38:34.897004   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:38:34.910678   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.1"|' /etc/containerd/config.toml"
	I0128 11:38:34.920097   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:38:34.929189   44543 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:38:34.929267   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:38:34.938161   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:38:34.946881   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:38:34.955654   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:38:34.964479   44543 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:38:34.972351   44543 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:38:34.981718   44543 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:38:34.989713   44543 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:38:34.996988   44543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:38:35.068224   44543 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:38:35.142523   44543 start.go:483] detecting cgroup driver to use...
	I0128 11:38:35.142541   44543 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:38:35.142620   44543 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:38:35.153182   44543 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:38:35.153253   44543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:38:35.163836   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:38:35.178720   44543 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:38:35.259003   44543 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:38:35.347015   44543 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:38:35.347044   44543 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
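The 144-byte daemon.json pushed here is not echoed into the log. As a rough sketch only (the exact bytes minikube wrote are not shown), a daemon.json that pins Docker's cgroup driver to cgroupfs would be written like this; the daemon-reload and restart that follow in the log pick it up:

	# illustrative content only -- not the exact file from this run
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF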
	I0128 11:38:35.361414   44543 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:38:35.460252   44543 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:38:35.673339   44543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:38:35.702506   44543 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:38:35.755309   44543 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.23 ...
	I0128 11:38:35.755430   44543 cli_runner.go:164] Run: docker exec -t old-k8s-version-182000 dig +short host.docker.internal
	I0128 11:38:35.866290   44543 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:38:35.866403   44543 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:38:35.871099   44543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:38:35.881298   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:35.938960   44543 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 11:38:35.939029   44543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:38:35.963744   44543 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:38:35.963760   44543 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:38:35.963839   44543 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:38:35.987743   44543 docker.go:630] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0128 11:38:35.987766   44543 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:38:35.987869   44543 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:38:36.060593   44543 cni.go:84] Creating CNI manager for ""
	I0128 11:38:36.060611   44543 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 11:38:36.060626   44543 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:38:36.060642   44543 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-182000 NodeName:old-k8s-version-182000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:38:36.060772   44543 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-182000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-182000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:38:36.060856   44543 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-182000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:38:36.060920   44543 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0128 11:38:36.068987   44543 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:38:36.069046   44543 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:38:36.076667   44543 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0128 11:38:36.090186   44543 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:38:36.103441   44543 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
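The 2174-byte file staged here is the kubeadm config printed above; kubeadm.yaml.new is diffed against the existing kubeadm.yaml in the restartCluster step below. Whether the restart path replays a full init or individual init phases depends on the saved cluster state, but either way the config is consumed through kubeadm's --config flag. A hedged sketch, using the binary path from this log rather than the exact invocation of this run:

	# sketch only -- flags minikube actually passes are not shown in this log
	sudo /var/lib/minikube/binaries/v1.16.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml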
	I0128 11:38:36.116877   44543 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:38:36.120988   44543 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:38:36.131581   44543 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000 for IP: 192.168.76.2
	I0128 11:38:36.131604   44543 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:38:36.131787   44543 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:38:36.131860   44543 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:38:36.131953   44543 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/client.key
	I0128 11:38:36.132028   44543 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key.31bdca25
	I0128 11:38:36.132108   44543 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key
	I0128 11:38:36.132318   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:38:36.132358   44543 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:38:36.132369   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:38:36.132403   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:38:36.132442   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:38:36.132471   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:38:36.132543   44543 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:38:36.133101   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:38:36.150864   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0128 11:38:36.168495   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:38:36.186133   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/old-k8s-version-182000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:38:36.203690   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:38:36.221313   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:38:36.238771   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:38:36.256875   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:38:36.274347   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:38:36.292040   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:38:36.309410   44543 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:38:36.327054   44543 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:38:36.340446   44543 ssh_runner.go:195] Run: openssl version
	I0128 11:38:36.345907   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:38:36.354071   44543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:38:36.358227   44543 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:38:36.358283   44543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:38:36.363662   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:38:36.371325   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:38:36.379702   44543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:38:36.383611   44543 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:38:36.383656   44543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:38:36.389131   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
	I0128 11:38:36.396706   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:38:36.404820   44543 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:38:36.408898   44543 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:38:36.408940   44543 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:38:36.414565   44543 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:38:36.422400   44543 kubeadm.go:401] StartCluster: {Name:old-k8s-version-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-182000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:38:36.422505   44543 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:38:36.445371   44543 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:38:36.453462   44543 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:38:36.453476   44543 kubeadm.go:633] restartCluster start
	I0128 11:38:36.453528   44543 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:38:36.460603   44543 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:38:36.460677   44543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-182000
	I0128 11:38:36.520462   44543 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-182000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:38:36.520625   44543 kubeconfig.go:146] "old-k8s-version-182000" context is missing from /Users/jenkins/minikube-integration/15565-24808/kubeconfig - will repair!
	I0128 11:38:36.520955   44543 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:38:36.522183   44543 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:38:36.530428   44543 api_server.go:165] Checking apiserver status ...
	I0128 11:38:36.530483   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:38:36.539038   44543 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
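The identical probes that follow fire roughly every 500 ms until the apiserver process appears or minikube's wait gives up. In shell terms the recorded loop amounts to this sketch:

	# sketch of the poll recorded below; the real timeout and backoff
	# live in minikube's api_server.go, not in this loop
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
		sleep 0.5
	done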
	[... the same "Checking apiserver status" / pgrep probe repeats at roughly 500ms intervals with identical empty stdout/stderr, from I0128 11:38:37.039 through a final attempt at I0128 11:38:46.560 ...]
	I0128 11:38:46.560547   44543 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0128 11:38:46.560555   44543 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:38:46.560621   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
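
Before reconfiguring, minikube stops the kubelet and every kube-system container. The two log lines above give the exact docker filter it uses; the sketch below reproduces that list-then-stop pattern locally. Error handling is simplified, and the real code runs these commands over SSH inside the node rather than on the host.

    package main

    import (
        "log"
        "os/exec"
        "strings"
    )

    // List every container whose name matches the kube-system pod pattern,
    // then stop the lot, as the log does before reconfiguring the cluster.
    // The filter and format flags are copied verbatim from the log.
    func main() {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_",
            "--format", "{{.ID}}").Output()
        if err != nil {
            log.Fatal(err)
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return // nothing to stop
        }
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            log.Fatal(err)
        }
    }
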
	I0128 11:38:46.583443   44543 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:38:46.594028   44543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:38:46.601860   44543 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Jan 28 19:35 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jan 28 19:35 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5795 Jan 28 19:35 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5679 Jan 28 19:35 /etc/kubernetes/scheduler.conf
	
	I0128 11:38:46.601918   44543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:38:46.609568   44543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:38:46.617181   44543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:38:46.625003   44543 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
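
The four grep runs above decide whether each generated kubeconfig-style file still points at the expected control-plane endpoint; a non-zero grep exit marks the file as stale. A local, simplified equivalent follows, with the file list and endpoint copied from the log (the real check runs sudo grep over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                fmt.Printf("%s: unreadable: %v\n", f, err)
                continue
            }
            // Equivalent of `grep <endpoint> <file>` exiting non-zero.
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s: stale endpoint, needs regeneration\n", f)
            }
        }
    }
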
	I0128 11:38:46.633188   44543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:38:46.641128   44543 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:38:46.641141   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:38:46.695077   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:38:47.045382   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:38:47.253468   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:38:47.311029   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
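
Reconfiguration then replays the individual kubeadm init phases in a fixed order, as the five Run lines above show: certs, kubeconfig, kubelet-start, control-plane, and local etcd, each against the same /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence is below; the binary directory and config path are taken from the log, while the stop-on-first-error handling is an assumption.

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
    )

    func main() {
        binDir := "/var/lib/minikube/binaries/v1.16.0"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all", "--config", cfg},
            {"init", "phase", "kubeconfig", "all", "--config", cfg},
            {"init", "phase", "kubelet-start", "--config", cfg},
            {"init", "phase", "control-plane", "all", "--config", cfg},
            {"init", "phase", "etcd", "local", "--config", cfg},
        }
        for _, args := range phases {
            // Mirror `sudo env PATH=<binDir>:$PATH kubeadm ...` from the log.
            full := append([]string{"env", "PATH=" + binDir + ":" + os.Getenv("PATH"),
                binDir + "/kubeadm"}, args...)
            cmd := exec.Command("sudo", full...)
            if out, err := cmd.CombinedOutput(); err != nil {
                log.Fatalf("phase %s failed: %v\n%s", args[2], err, out)
            }
            fmt.Printf("phase %s done\n", args[2])
        }
    }
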
	I0128 11:38:47.371992   44543 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:38:47.372055   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the pgrep probe repeats every ~500ms with no match, from I0128 11:38:47.882 through I0128 11:39:46.883 (about 120 attempts over one minute) ...]
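
Every retry above is the same probe: pgrep's -x anchors the regex match, -f matches against the full command line, and -n picks the newest match, so exit status 1 simply means no kube-apiserver process exists yet. The wait can be sketched as a poll with a deadline; the 60-second budget matches the span of the retries above but is otherwise an assumption.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls for the apiserver process the way the log does:
    // run pgrep every ~500ms until it matches or the deadline passes. pgrep
    // exits 0 on a match and 1 when nothing matches.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for kube-apiserver after %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(60 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
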
	I0128 11:39:47.382495   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:39:47.408926   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.408940   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:39:47.409009   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:39:47.434334   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.434350   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:39:47.434428   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:39:47.458403   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.458417   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:39:47.458510   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:39:47.517305   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.517320   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:39:47.517392   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:39:47.540794   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.540809   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:39:47.540881   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:39:47.564945   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.564958   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:39:47.565024   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:39:47.588267   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.588279   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:39:47.588352   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:39:47.611377   44543 logs.go:279] 0 containers: []
	W0128 11:39:47.611391   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:39:47.611398   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:39:47.611404   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:39:49.661285   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049863022s)
	I0128 11:39:49.661440   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:39:49.661447   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:39:49.699810   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:39:49.699828   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:39:49.713273   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:39:49.713288   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:39:49.770865   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:39:49.770882   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:39:49.770890   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
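
The cycle just shown scans for one container per control-plane component with a separate docker ps query; zero hits across the board is what triggers the log gathering, including the crictl-or-docker fallback in the "container status" step. A sketch of that scan follows, with the component list copied from the filters above.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // One `docker ps -a` query per component, counting matching container
    // IDs, as in the diagnostic cycle above.
    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kubernetes-dashboard", "storage-provisioner",
            "kube-controller-manager",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c,
                "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers\n", c, len(ids))
        }
    }
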
	I0128 11:39:52.286999   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:39:52.382321   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:39:52.408321   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.408335   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:39:52.408403   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:39:52.431847   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.431859   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:39:52.431925   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:39:52.454930   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.454944   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:39:52.455010   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:39:52.478939   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.478951   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:39:52.479022   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:39:52.501652   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.501666   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:39:52.501738   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:39:52.525789   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.525804   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:39:52.525875   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:39:52.548793   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.548807   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:39:52.548877   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:39:52.573101   44543 logs.go:279] 0 containers: []
	W0128 11:39:52.573113   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:39:52.573120   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:39:52.573130   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:39:52.610280   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:39:52.610295   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:39:52.622524   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:39:52.622537   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:39:52.679037   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:39:52.679056   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:39:52.679063   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:39:52.695141   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:39:52.695155   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:39:54.744364   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049191882s)
	I0128 11:39:57.244612   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:39:57.381960   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:39:57.406285   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.406298   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:39:57.406378   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:39:57.429865   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.429878   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:39:57.429954   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:39:57.452888   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.452904   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:39:57.452978   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:39:57.477122   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.477139   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:39:57.477224   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:39:57.499566   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.499579   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:39:57.499644   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:39:57.523859   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.523872   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:39:57.523943   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:39:57.547648   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.547677   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:39:57.547747   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:39:57.570750   44543 logs.go:279] 0 containers: []
	W0128 11:39:57.570763   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:39:57.570770   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:39:57.570777   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:39:59.620167   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049373044s)
	I0128 11:39:59.620300   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:39:59.620310   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:39:59.658653   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:39:59.658668   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:39:59.671222   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:39:59.671238   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:39:59.726927   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:39:59.726943   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:39:59.726951   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:02.244711   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:02.381506   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:02.405530   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.405543   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:02.405612   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:02.429805   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.429819   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:02.429892   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:02.453588   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.453603   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:02.453692   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:02.511898   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.511916   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:02.511992   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:02.535517   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.535529   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:02.535597   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:02.558938   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.558951   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:02.559019   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:02.581774   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.581788   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:02.581868   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:02.604831   44543 logs.go:279] 0 containers: []
	W0128 11:40:02.604843   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:02.604850   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:02.604857   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:02.642487   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:02.642503   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:02.655072   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:02.655088   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:02.710047   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:02.710062   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:02.710068   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:02.725669   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:02.725683   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:04.774417   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048717223s)
	I0128 11:40:07.274687   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:07.381389   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:07.406879   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.406893   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:07.406970   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:07.430543   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.430559   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:07.430631   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:07.453404   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.453417   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:07.453496   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:07.476672   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.476685   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:07.476757   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:07.500781   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.500795   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:07.500866   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:07.524251   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.524264   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:07.524358   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:07.548668   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.548681   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:07.548748   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:07.572197   44543 logs.go:279] 0 containers: []
	W0128 11:40:07.572213   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:07.572221   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:07.572230   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:07.610083   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:07.610097   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:07.622819   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:07.622834   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:07.680431   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:07.680443   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:07.680449   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:07.696361   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:07.696380   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:09.746548   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050150202s)
	I0128 11:40:12.247204   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:12.383528   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:12.409036   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.409049   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:12.409126   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:12.432467   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.432480   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:12.432553   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:12.456018   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.456033   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:12.456102   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:12.479003   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.479017   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:12.479086   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:12.502964   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.502977   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:12.503041   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:12.525785   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.525798   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:12.525869   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:12.550535   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.550549   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:12.550617   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:12.573931   44543 logs.go:279] 0 containers: []
	W0128 11:40:12.573944   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:12.573951   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:12.573957   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:12.613758   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:12.613773   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:12.626388   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:12.626405   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:12.685045   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:12.685059   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:12.685065   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:12.700493   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:12.700508   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:14.749555   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049028188s)
	I0128 11:40:17.249906   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:17.381729   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:17.408251   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.408269   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:17.408348   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:17.433642   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.433657   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:17.433741   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:17.457769   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.457784   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:17.457855   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:17.510537   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.510555   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:17.510615   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:17.534783   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.534798   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:17.534870   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:17.557801   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.557814   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:17.557883   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:17.581131   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.581145   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:17.581213   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:17.605085   44543 logs.go:279] 0 containers: []
	W0128 11:40:17.605098   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:17.605106   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:17.605112   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:19.655919   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050789896s)
	I0128 11:40:19.656026   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:19.656033   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:19.694701   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:19.694723   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:19.708096   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:19.708117   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:19.766007   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:19.766024   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:19.766031   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:22.281761   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:22.381651   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:22.407871   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.407885   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:22.407952   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:22.431886   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.431900   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:22.431973   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:22.455308   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.455329   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:22.455408   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:22.478671   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.478685   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:22.478756   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:22.501864   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.501878   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:22.501947   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:22.525784   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.525798   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:22.525869   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:22.550001   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.550014   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:22.550082   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:22.572791   44543 logs.go:279] 0 containers: []
	W0128 11:40:22.572805   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:22.572812   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:22.572820   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:22.584796   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:22.584817   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:22.640534   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:22.640545   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:22.640551   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:22.655944   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:22.655959   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:24.706094   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05011854s)
	I0128 11:40:24.706212   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:24.706219   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:27.246193   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:27.381334   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:27.405881   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.405907   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:27.406000   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:27.431062   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.431077   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:27.431150   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:27.455627   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.455640   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:27.455714   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:27.479092   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.479108   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:27.479178   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:27.504252   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.504265   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:27.504350   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:27.528799   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.528811   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:27.528881   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:27.553784   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.553797   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:27.553867   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:27.579271   44543 logs.go:279] 0 containers: []
	W0128 11:40:27.579286   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:27.579295   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:27.579302   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:27.631463   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:27.631483   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:27.645547   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:27.645565   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:27.705889   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:27.705904   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:27.705912   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:27.724126   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:27.724140   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:29.773191   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04903288s)
	I0128 11:40:32.273539   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:32.381632   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:32.406678   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.406692   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:32.406766   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:32.442071   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.442087   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:32.442167   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:32.475566   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.475585   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:32.475657   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:32.510300   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.510327   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:32.510409   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:32.542032   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.542060   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:32.542151   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:32.572903   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.572918   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:32.572986   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:32.599414   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.599427   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:32.599501   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:32.624765   44543 logs.go:279] 0 containers: []
	W0128 11:40:32.624784   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:32.624793   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:32.624803   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:34.678619   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05379856s)
	I0128 11:40:34.678745   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:34.678755   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:34.721083   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:34.721102   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:34.735141   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:34.735156   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:34.796468   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:34.796482   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:34.796495   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
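
Every "describe nodes" attempt above fails the same way: kubectl cannot reach the apiserver endpoint at localhost:8443. (The "container status" step still succeeds only because its shell command falls back from crictl to plain "docker ps -a".) A quick way to reproduce the connectivity half of that failure, independent of kubectl, is a bare TCP dial of the same port; this is an illustrative sketch, not part of the test suite:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver endpoint that kubectl was pointed at.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// Matches the log: nothing is listening, so the dial is refused.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
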
	I0128 11:40:37.313291   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:37.382388   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:37.411545   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.411557   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:37.411629   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:37.436144   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.436159   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:37.436228   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:37.459844   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.459858   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:37.459927   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:37.483024   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.483034   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:37.483091   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:37.507996   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.508010   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:37.508092   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:37.533998   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.534013   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:37.534133   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:37.560359   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.560374   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:37.560450   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:37.584086   44543 logs.go:279] 0 containers: []
	W0128 11:40:37.584101   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:37.584111   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:37.584118   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:37.625645   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:37.625661   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:37.638601   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:37.638616   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:37.697356   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:37.697376   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:37.697386   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:37.713865   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:37.713880   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:39.766710   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052812662s)
	I0128 11:40:42.268935   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:42.382878   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:42.408441   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.408454   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:42.408525   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:42.431468   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.431481   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:42.431550   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:42.455156   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.455170   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:42.455244   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:42.479500   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.479513   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:42.479581   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:42.505380   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.505396   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:42.505470   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:42.530711   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.530725   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:42.530794   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:42.554735   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.554748   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:42.554821   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:42.578913   44543 logs.go:279] 0 containers: []
	W0128 11:40:42.578927   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:42.578934   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:42.578941   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:42.634676   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:42.634687   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:42.634694   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:42.650067   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:42.650080   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:44.700969   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050871489s)
	I0128 11:40:44.701075   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:44.701082   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:44.737874   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:44.737887   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:47.251295   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:47.382853   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:47.407892   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.407905   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:47.407971   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:47.433965   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.433978   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:47.434048   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:47.458432   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.458445   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:47.458520   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:47.482088   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.482102   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:47.482170   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:47.545231   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.545247   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:47.545316   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:47.569483   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.569496   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:47.569567   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:47.593366   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.593381   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:47.593455   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:47.617731   44543 logs.go:279] 0 containers: []
	W0128 11:40:47.617745   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:47.617751   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:47.617758   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:47.656021   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:47.656037   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:47.668664   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:47.668678   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:47.723161   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:47.723176   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:47.723183   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:47.738990   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:47.739006   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:49.792093   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053069579s)
	I0128 11:40:52.292792   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:52.381839   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:52.406490   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.406506   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:52.406603   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:52.439926   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.439943   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:52.440024   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:52.473789   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.473802   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:52.473872   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:52.496897   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.496912   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:52.496979   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:52.521219   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.521235   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:52.521316   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:52.551156   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.551175   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:52.551255   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:52.576904   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.576919   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:52.576987   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:52.599683   44543 logs.go:279] 0 containers: []
	W0128 11:40:52.599696   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:52.599703   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:52.599710   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:52.642716   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:52.642737   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:52.656946   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:52.656963   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:52.724211   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:52.724227   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:52.724236   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:52.746113   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:52.746139   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:54.803677   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057519829s)
	I0128 11:40:57.306012   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:40:57.383515   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:40:57.409188   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.409214   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:40:57.409339   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:40:57.443105   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.443123   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:40:57.443236   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:40:57.482730   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.482759   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:40:57.482841   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:40:57.515319   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.515336   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:40:57.515424   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:40:57.546556   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.546570   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:40:57.546650   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:40:57.575219   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.575231   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:40:57.575313   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:40:57.601407   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.601424   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:40:57.601498   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:40:57.628229   44543 logs.go:279] 0 containers: []
	W0128 11:40:57.628246   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:40:57.628253   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:40:57.628260   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:40:57.677438   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:40:57.677455   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:40:57.690197   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:40:57.690213   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:40:57.762716   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:40:57.762728   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:40:57.762735   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:40:57.780284   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:40:57.780298   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:40:59.838172   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057855892s)
	I0128 11:41:02.338469   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:02.382435   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:02.406446   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.406460   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:02.406543   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:02.431565   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.431578   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:02.431647   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:02.464204   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.464218   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:02.464288   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:02.490358   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.490373   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:02.490444   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:02.516477   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.516492   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:02.516574   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:02.544801   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.544816   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:02.544900   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:02.568812   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.568826   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:02.568896   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:02.595511   44543 logs.go:279] 0 containers: []
	W0128 11:41:02.595526   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:02.595533   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:02.595542   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:02.637098   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:02.637120   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:02.651984   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:02.652001   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:02.731148   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:02.731160   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:02.731172   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:02.749039   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:02.749054   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:04.803415   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054340567s)
	I0128 11:41:07.303650   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:07.381870   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:07.406721   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.406736   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:07.406803   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:07.430438   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.430452   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:07.430520   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:07.453634   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.453647   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:07.453716   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:07.477704   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.477717   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:07.477787   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:07.501059   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.501073   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:07.501141   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:07.523792   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.523804   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:07.523873   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:07.546465   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.546478   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:07.546551   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:07.570502   44543 logs.go:279] 0 containers: []
	W0128 11:41:07.570514   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:07.570521   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:07.570528   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:09.623028   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.0524835s)
	I0128 11:41:09.623151   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:09.623163   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:09.662771   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:09.662788   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:09.675954   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:09.675969   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:09.732431   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:09.732442   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:09.732451   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:12.249664   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:12.381609   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:12.406793   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.406806   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:12.406889   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:12.436195   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.436207   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:12.436316   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:12.468690   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.468702   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:12.468767   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:12.524290   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.524306   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:12.524413   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:12.552110   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.552123   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:12.552195   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:12.578379   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.578400   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:12.578471   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:12.604193   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.604206   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:12.604273   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:12.629628   44543 logs.go:279] 0 containers: []
	W0128 11:41:12.629641   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:12.629649   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:12.629660   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:14.681633   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051954193s)
	I0128 11:41:14.681744   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:14.681752   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:14.725669   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:14.725690   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:14.739421   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:14.739437   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:14.801448   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:14.801462   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:14.801475   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:17.317372   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:17.381510   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:17.408071   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.408085   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:17.408148   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:17.434535   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.434549   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:17.434618   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:17.460732   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.460744   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:17.460796   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:17.486464   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.486477   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:17.486556   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:17.512203   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.512214   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:17.512278   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:17.539536   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.539564   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:17.539636   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:17.565898   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.565913   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:17.565986   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:17.592057   44543 logs.go:279] 0 containers: []
	W0128 11:41:17.592071   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:17.592078   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:17.592086   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:17.651377   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:17.651388   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:17.651395   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:17.669372   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:17.669395   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:19.723605   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.054188673s)
	I0128 11:41:19.723711   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:19.723718   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:19.766081   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:19.766100   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:22.282354   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:22.381557   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:22.413697   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.413715   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:22.413840   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:22.441094   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.441119   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:22.441223   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:22.471980   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.471995   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:22.472081   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:22.505580   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.505598   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:22.505694   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:22.534665   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.534690   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:22.534786   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:22.565200   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.565215   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:22.565293   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:22.597159   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.597176   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:22.597273   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:22.627157   44543 logs.go:279] 0 containers: []
	W0128 11:41:22.627171   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:22.627178   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:22.627187   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:22.642869   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:22.642885   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:22.725614   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:22.725627   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:22.725638   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:22.745146   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:22.745163   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:24.802091   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05691147s)
	I0128 11:41:24.802226   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:24.802237   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:27.350147   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:27.381516   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:27.407207   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.407224   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:27.407299   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:27.433127   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.433141   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:27.433218   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:27.459478   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.459492   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:27.459566   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:27.486202   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.486218   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:27.486293   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:27.512560   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.512596   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:27.512667   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:27.539331   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.539344   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:27.539417   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:27.565528   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.565546   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:27.565616   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:27.596689   44543 logs.go:279] 0 containers: []
	W0128 11:41:27.596707   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:27.596716   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:27.596725   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:27.644669   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:27.644691   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:27.662168   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:27.662187   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:27.755386   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:27.755404   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:27.755421   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:27.777621   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:27.777638   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:29.835931   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.058274241s)
	I0128 11:41:32.336172   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:32.381920   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:32.406851   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.406865   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:32.406936   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:32.432516   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.432531   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:32.432601   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:32.463597   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.463611   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:32.463683   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:32.487789   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.487801   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:32.487869   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:32.512640   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.512654   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:32.512723   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:32.541499   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.541513   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:32.541585   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:32.567145   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.567160   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:32.567296   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:32.596484   44543 logs.go:279] 0 containers: []
	W0128 11:41:32.596501   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:32.596510   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:32.596519   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:32.611085   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:32.611099   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:32.673304   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:32.673318   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:32.673327   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:32.690153   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:32.690167   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:34.741488   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051296281s)
	I0128 11:41:34.741612   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:34.741621   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:37.285534   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:37.381662   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:37.406302   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.406316   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:37.406383   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:37.432991   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.433005   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:37.433074   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:37.455648   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.455662   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:37.455746   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:37.512734   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.512768   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:37.512845   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:37.540256   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.540270   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:37.540346   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:37.564217   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.564231   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:37.564306   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:37.588776   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.588789   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:37.588844   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:37.611693   44543 logs.go:279] 0 containers: []
	W0128 11:41:37.611707   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:37.611714   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:37.611721   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:39.661877   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050124623s)
	I0128 11:41:39.661983   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:39.661990   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:39.700309   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:39.700333   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:39.713347   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:39.713363   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:39.769176   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:39.769187   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:39.769193   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:42.285077   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:42.381794   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:42.407013   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.407027   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:42.407115   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:42.433493   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.433514   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:42.433598   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:42.457344   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.457358   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:42.457433   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:42.482497   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.482515   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:42.482605   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:42.508121   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.508135   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:42.508215   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:42.533548   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.533563   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:42.533633   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:42.557743   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.557756   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:42.557825   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:42.581115   44543 logs.go:279] 0 containers: []
	W0128 11:41:42.581132   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:42.581139   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:42.581146   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:42.621621   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:42.621637   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:42.634835   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:42.634849   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:42.694759   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:42.694772   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:42.694779   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:42.712049   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:42.712066   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:44.764730   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05264496s)
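
The "container status" step consistently takes about two seconds because it is a shell fallback chain: prefer `crictl ps -a`, and fall back to `docker ps -a` when crictl is missing or fails. The same fallback rewritten in Go for clarity (error handling simplified; the sudo wrapper kept from the log):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus mirrors the shell one-liner from the log:
	//   sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	// i.e. prefer crictl, but fall back to the Docker CLI when crictl
	// is absent from PATH or exits non-zero.
	func containerStatus() (string, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
				return string(out), nil
			}
		}
		out, err := exec.Command("sudo", "docker", "ps", "-a").Output()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
		fmt.Print(out)
	}

The fallback exists because the node may be running either a CRI runtime or plain Docker; trying crictl first keeps the same collection step working on both.
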
	I0128 11:41:47.265426   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:47.383525   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:47.408878   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.408893   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:47.408971   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:47.433793   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.433807   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:47.433877   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:47.459172   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.459186   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:47.459262   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:47.483526   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.483540   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:47.483613   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:47.507890   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.507904   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:47.507972   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:47.532999   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.533014   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:47.533083   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:47.556013   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.556028   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:47.556106   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:47.583773   44543 logs.go:279] 0 containers: []
	W0128 11:41:47.583787   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:47.583797   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:47.583804   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:47.623375   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:47.623390   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:47.636009   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:47.636025   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:47.709255   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:47.709270   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:47.709277   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:47.727666   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:47.727683   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:49.781463   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053758174s)
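
Every `describe nodes` attempt above fails identically: the `connection refused` on `localhost:8443` is kubectl's error, raised before any API call is made, because nothing is listening on the apiserver port. A plain TCP probe reproduces the same symptom (illustrative only; the endpoint is the one kubectl reads from /var/lib/minikube/kubeconfig):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// kubectl's target in this cluster is localhost:8443.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// With no kube-apiserver container running, the port is closed
			// and the dial fails with "connection refused", matching the
			// kubectl stderr repeated throughout the log.
			fmt.Println("probe failed:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
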
	I0128 11:41:52.282967   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:52.382184   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:52.407084   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.407098   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:52.407174   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:52.430857   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.430871   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:52.430939   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:52.457217   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.457232   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:52.457317   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:52.511112   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.511124   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:52.511192   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:52.535688   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.535702   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:52.535768   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:52.558839   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.558854   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:52.558921   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:52.582248   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.582261   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:52.582327   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:52.605926   44543 logs.go:279] 0 containers: []
	W0128 11:41:52.605941   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:52.605949   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:52.605956   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:54.655892   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049912254s)
	I0128 11:41:54.655998   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:54.656004   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:54.694634   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:54.694653   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:54.707372   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:54.707385   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:54.765791   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:54.765802   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:54.765810   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:57.281604   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:41:57.382894   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:41:57.408599   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.408613   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:41:57.408679   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:41:57.431377   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.431391   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:41:57.431460   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:41:57.455571   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.455585   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:41:57.455651   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:41:57.478974   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.478988   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:41:57.479056   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:41:57.502433   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.502447   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:41:57.502518   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:41:57.526130   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.526145   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:41:57.526215   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:41:57.549379   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.549392   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:41:57.549460   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:41:57.572563   44543 logs.go:279] 0 containers: []
	W0128 11:41:57.572578   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:41:57.572585   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:41:57.572592   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:41:57.611619   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:41:57.611635   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:41:57.624375   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:41:57.624388   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:41:57.679990   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:41:57.680002   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:41:57.680008   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:41:57.696274   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:41:57.696288   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:41:59.748971   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052665717s)
	I0128 11:42:02.249785   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:02.381879   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:02.406582   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.406594   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:02.406661   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:02.429918   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.429931   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:02.430002   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:02.453127   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.453141   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:02.453210   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:02.477180   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.477193   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:02.477262   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:02.500626   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.500639   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:02.500707   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:02.526336   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.526354   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:02.526450   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:02.551192   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.551206   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:02.551275   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:02.577038   44543 logs.go:279] 0 containers: []
	W0128 11:42:02.577053   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:02.577060   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:02.577066   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:02.616010   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:02.616027   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:02.628773   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:02.628788   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:02.684637   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:02.684648   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:02.684654   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:02.700014   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:02.700026   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:04.759331   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.05928768s)
	I0128 11:42:07.260363   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:07.381999   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:07.406954   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.406967   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:07.407037   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:07.430488   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.430507   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:07.430590   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:07.455069   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.455085   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:07.455160   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:07.508098   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.508115   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:07.508203   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:07.531161   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.531174   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:07.531249   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:07.554562   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.554576   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:07.554644   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:07.577751   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.577766   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:07.577838   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:07.602304   44543 logs.go:279] 0 containers: []
	W0128 11:42:07.602316   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:07.602323   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:07.602333   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:07.614388   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:07.614401   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:07.672297   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:07.672309   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:07.672315   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:07.687606   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:07.687620   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:09.736655   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049017715s)
	I0128 11:42:09.736764   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:09.736772   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:12.277110   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:12.381648   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:12.405329   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.405341   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:12.405408   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:12.428916   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.428930   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:12.428998   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:12.452177   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.452190   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:12.452260   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:12.478412   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.478427   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:12.478496   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:12.502012   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.502025   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:12.502096   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:12.526284   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.526296   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:12.526361   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:12.553232   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.553244   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:12.553306   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:12.577102   44543 logs.go:279] 0 containers: []
	W0128 11:42:12.577116   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:12.577123   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:12.577130   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:14.627392   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050245494s)
	I0128 11:42:14.627498   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:14.627504   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:14.665087   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:14.665100   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:14.677662   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:14.677676   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:14.734076   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:14.734087   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:14.734094   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:17.249909   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:17.381720   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:17.406168   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.406181   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:17.406249   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:17.428813   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.428827   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:17.428895   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:17.451432   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.451445   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:17.451511   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:17.475432   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.475445   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:17.475512   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:17.499315   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.499328   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:17.499395   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:17.522231   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.522246   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:17.522315   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:17.547336   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.547347   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:17.547438   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:17.570879   44543 logs.go:279] 0 containers: []
	W0128 11:42:17.570892   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:17.570899   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:17.570908   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:17.586679   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:17.586692   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:19.635693   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048983105s)
	I0128 11:42:19.635797   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:19.635803   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:19.674688   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:19.674704   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:19.687283   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:19.687300   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:19.742667   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:22.242884   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:22.382402   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:22.406801   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.406816   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:22.406901   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:22.430514   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.430530   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:22.430609   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:22.454384   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.454411   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:22.454491   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:22.510348   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.510361   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:22.510435   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:22.535621   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.535634   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:22.535704   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:22.559217   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.559232   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:22.559300   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:22.582645   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.582660   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:22.582728   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:22.606564   44543 logs.go:279] 0 containers: []
	W0128 11:42:22.606578   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:22.606585   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:22.606592   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:24.671006   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.064396801s)
	I0128 11:42:24.671126   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:24.671135   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:24.713159   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:24.713176   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:24.726535   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:24.726557   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:24.787653   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:24.787663   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:24.787670   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:27.303633   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:27.381997   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:27.412222   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.412235   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:27.412293   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:27.438170   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.438184   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:27.438237   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:27.464595   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.464613   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:27.464692   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:27.491568   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.491595   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:27.491674   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:27.519929   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.519945   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:27.520022   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:27.548084   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.548097   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:27.548167   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:27.575185   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.575197   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:27.575257   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:27.602862   44543 logs.go:279] 0 containers: []
	W0128 11:42:27.602875   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:27.602882   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:27.602892   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:27.619749   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:27.619764   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:29.677292   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.057509718s)
	I0128 11:42:29.677401   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:29.677408   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:29.714556   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:29.714569   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:29.726688   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:29.726700   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:29.782368   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:32.282584   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:32.383770   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:32.409668   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.409685   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:32.409755   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:32.433145   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.433158   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:32.433226   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:32.456128   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.456141   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:32.456212   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:32.479960   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.479974   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:32.480043   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:32.503930   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.503943   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:32.504014   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:32.527296   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.527309   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:32.527376   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:32.550191   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.550204   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:32.550270   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:32.573325   44543 logs.go:279] 0 containers: []
	W0128 11:42:32.573339   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:32.573348   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:32.573357   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:32.629253   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:32.629263   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:32.629270   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:32.644685   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:32.644698   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:34.692709   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047993526s)
	I0128 11:42:34.692817   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:34.692824   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:34.729919   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:34.729932   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:37.242394   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:37.381738   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:37.406482   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.406493   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:37.406563   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:37.433294   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.433314   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:37.433388   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:37.457625   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.457641   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:37.457710   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:37.518144   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.518158   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:37.518229   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:37.545851   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.545864   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:37.545931   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:37.570182   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.570195   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:37.570267   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:37.597017   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.597030   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:37.597100   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:37.622857   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.622870   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:37.622877   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:37.622883   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:37.661481   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:37.661493   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:37.673981   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:37.673993   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:37.729193   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:37.729204   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:37.729210   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:37.744548   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:37.744559   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:39.795448   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050870724s)
	I0128 11:42:42.295696   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:42.381841   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:42.405737   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.405749   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:42.405820   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:42.430800   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.430812   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:42.430877   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:42.456163   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.456176   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:42.456260   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:42.482324   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.482339   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:42.482411   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:42.506786   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.506801   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:42.506873   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:42.532846   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.532862   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:42.532930   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:42.557080   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.557095   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:42.557165   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:42.584597   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.584629   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:42.584641   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:42.584653   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:42.627541   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:42.627560   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:42.642819   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:42.642833   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:42.702889   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:42.702903   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:42.702910   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:42.720162   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:42.720176   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:44.772006   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051811665s)
	I0128 11:42:47.272991   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:47.382160   44543 kubeadm.go:637] restartCluster took 4m10.928048519s
	W0128 11:42:47.382318   44543 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
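
After 4m10s of failed polls, restartCluster gives up and minikube falls back to wiping the cluster state with `kubeadm reset` before re-running `kubeadm init`. A sketch of issuing that reset from Go; note the log's bash invocation expands `$PATH`, and since exec.Command does not go through a shell, the sketch substitutes an assumed PATH tail:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// PATH is pinned so kubeadm resolves the binaries staged under
		// /var/lib/minikube/binaries/v1.16.0. The ":$PATH" tail from the
		// original bash command is replaced with an assumed default here.
		cmd := exec.Command("sudo", "env",
			"PATH=/var/lib/minikube/binaries/v1.16.0:/usr/sbin:/usr/bin:/sbin:/bin",
			"kubeadm", "reset",
			"--cri-socket", "/var/run/dockershim.sock",
			"--force")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("kubeadm reset failed:", err)
		}
	}
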
	I0128 11:42:47.382349   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:42:47.797020   44543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:42:47.806926   44543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:42:47.814650   44543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:42:47.814700   44543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:42:47.822491   44543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
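
The `ls -la` probe exits with status 2 because none of the four kubeconfig files exist yet, so the stale-config cleanup is skipped and minikube proceeds directly to `kubeadm init`. The same existence check, sketched in Go with the paths copied from the log:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		stale := false
		for _, f := range files {
			if _, err := os.Stat(f); err == nil {
				stale = true // an old config exists and would need cleanup
			} else {
				fmt.Println("missing:", f)
			}
		}
		if !stale {
			fmt.Println("no stale configs; skipping cleanup, running kubeadm init")
		}
	}
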
	I0128 11:42:47.822534   44543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:42:47.871357   44543 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:42:47.871408   44543 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:42:48.169055   44543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:42:48.169165   44543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:42:48.169287   44543 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0128 11:42:48.396407   44543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:42:48.397257   44543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:42:48.403754   44543 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:42:48.464466   44543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:42:48.486240   44543 out.go:204]   - Generating certificates and keys ...
	I0128 11:42:48.486314   44543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:42:48.486394   44543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:42:48.486495   44543 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:42:48.486568   44543 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:42:48.486651   44543 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:42:48.486728   44543 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:42:48.486820   44543 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:42:48.486878   44543 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:42:48.486955   44543 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:42:48.487057   44543 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:42:48.487098   44543 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:42:48.487160   44543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:42:48.621957   44543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:42:48.689789   44543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:42:48.859512   44543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:42:48.953322   44543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:42:48.953809   44543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:42:48.975344   44543 out.go:204]   - Booting up control plane ...
	I0128 11:42:48.975447   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:42:48.975516   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:42:48.975585   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:42:48.975669   44543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:42:48.975833   44543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:43:28.962744   44543 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:43:28.964132   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:28.964338   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:43:33.964812   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:33.965223   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:43:43.966115   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:43.966280   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:44:03.967247   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:44:03.967407   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:44:43.968381   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:44:43.968526   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:44:43.968543   44543 kubeadm.go:322] 
	I0128 11:44:43.968585   44543 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:44:43.968640   44543 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:44:43.968649   44543 kubeadm.go:322] 
	I0128 11:44:43.968675   44543 kubeadm.go:322] This error is likely caused by:
	I0128 11:44:43.968705   44543 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:44:43.968789   44543 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:44:43.968795   44543 kubeadm.go:322] 
	I0128 11:44:43.968869   44543 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:44:43.968905   44543 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:44:43.968937   44543 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:44:43.968944   44543 kubeadm.go:322] 
	I0128 11:44:43.969045   44543 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:44:43.969126   44543 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:44:43.969199   44543 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:44:43.969235   44543 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:44:43.969296   44543 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:44:43.969323   44543 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:44:43.972133   44543 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:44:43.972212   44543 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:44:43.972318   44543 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:44:43.972401   44543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:44:43.972487   44543 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:44:43.972572   44543 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
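(The [kubelet-check] lines above show kubeadm polling the kubelet's local health endpoint for the whole 4m0s wait-control-plane window without ever getting an answer, so init gives up. The triage the log itself recommends amounts to the following, run inside the node; a sketch, nothing here is specific to this run:)

    # is the kubelet up, and if not, why did it exit?
    systemctl status kubelet
    journalctl -xeu kubelet
    # the exact probe kubeadm kept retrying
    curl -sSL http://localhost:10248/healthz
    # did a control-plane container start and crash instead?
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID    # CONTAINERID taken from the listing above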
	W0128 11:44:43.972733   44543 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
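(These preflight warnings are ignored by the run, but the first one is a classic culprit for exactly this symptom: a cgroup-driver mismatch between Docker and the kubelet can keep the kubelet from starting at all. The usual remedies, offered here as hedged suggestions rather than anything this test applies, are:)

    # align Docker with the recommended "systemd" cgroup driver,
    # then restart the daemon (note: this overwrites /etc/docker/daemon.json)
    echo '{ "exec-opts": ["native.cgroupdriver=systemd"] }' | sudo tee /etc/docker/daemon.json
    sudo systemctl restart docker
    # address the Swap warning
    sudo swapoff -a
    # address the Service-Kubelet warning
    sudo systemctl enable kubelet.service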
	
	I0128 11:44:43.972765   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:44:44.386961   44543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
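(Before retrying, minikube discards the half-initialized control plane; by hand the equivalent is roughly:)

    # wipe the failed first attempt, using the CRI socket from the log
    sudo kubeadm reset --cri-socket /var/run/dockershim.sock --force
    # confirm the kubelet is no longer running under the old config
    systemctl is-active --quiet kubelet || echo "kubelet inactive"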
	I0128 11:44:44.397184   44543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:44:44.397242   44543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:44:44.404787   44543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:44:44.404808   44543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:44:44.454297   44543 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:44:44.454347   44543 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:44:44.765732   44543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:44:44.765807   44543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:44:44.765871   44543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:44:44.988444   44543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:44:44.989287   44543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:44:44.995905   44543 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:44:45.064719   44543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:44:45.086245   44543 out.go:204]   - Generating certificates and keys ...
	I0128 11:44:45.086316   44543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:44:45.086376   44543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:44:45.086477   44543 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:44:45.086524   44543 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:44:45.086582   44543 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:44:45.086618   44543 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:44:45.086687   44543 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:44:45.086737   44543 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:44:45.086806   44543 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:44:45.086872   44543 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:44:45.086906   44543 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:44:45.086947   44543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:44:45.154429   44543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:44:45.276707   44543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:44:45.405556   44543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:44:45.590172   44543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:44:45.590722   44543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:44:45.612150   44543 out.go:204]   - Booting up control plane ...
	I0128 11:44:45.612238   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:44:45.612320   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:44:45.612402   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:44:45.612474   44543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:44:45.612619   44543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:45:25.601363   44543 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:45:25.602304   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:25.602557   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:30.604110   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:30.604416   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:40.606062   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:40.606272   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:46:00.608193   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:46:00.608357   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:46:40.610642   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:46:40.610865   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:46:40.610890   44543 kubeadm.go:322] 
	I0128 11:46:40.610939   44543 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:46:40.610985   44543 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:46:40.610998   44543 kubeadm.go:322] 
	I0128 11:46:40.611035   44543 kubeadm.go:322] This error is likely caused by:
	I0128 11:46:40.611068   44543 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:46:40.611196   44543 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:46:40.611214   44543 kubeadm.go:322] 
	I0128 11:46:40.611326   44543 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:46:40.611369   44543 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:46:40.611401   44543 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:46:40.611408   44543 kubeadm.go:322] 
	I0128 11:46:40.611532   44543 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:46:40.611612   44543 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:46:40.611700   44543 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:46:40.611735   44543 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:46:40.611794   44543 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:46:40.611822   44543 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:46:40.614670   44543 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:46:40.614732   44543 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:46:40.614833   44543 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:46:40.614908   44543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:46:40.614982   44543 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:46:40.615043   44543 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0128 11:46:40.615060   44543 kubeadm.go:403] StartCluster complete in 8m4.18836856s
	I0128 11:46:40.615155   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:46:40.638401   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.638414   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:46:40.638487   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:46:40.662066   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.662081   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:46:40.662163   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:46:40.684921   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.684935   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:46:40.685002   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:46:40.707757   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.707770   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:46:40.707838   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:46:40.732010   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.732024   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:46:40.732097   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:46:40.756267   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.756281   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:46:40.756349   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:46:40.779588   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.779605   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:46:40.779687   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:46:40.803900   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.803913   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:46:40.803920   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:46:40.803928   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:46:40.819641   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:46:40.819654   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:46:42.870991   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051318213s)
	I0128 11:46:42.871102   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:46:42.871109   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:46:42.908391   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:46:42.908404   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:46:42.920405   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:46:42.920418   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:46:42.976791   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
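(With both init attempts dead, minikube sweeps for diagnostics: each control-plane component is looked up by its k8s_ container-name prefix, every lookup returns zero containers, and the describe-nodes call above can only hit a refused localhost:8443. The same sweep by hand, using only commands the log itself runs:)

    # per-component lookup, repeated for kube-apiserver, etcd, coredns, ...
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    # runtime-agnostic listing: crictl if installed, docker otherwise
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    # node view via the bundled kubectl (fails here: the apiserver never came up)
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig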
	W0128 11:46:42.976808   44543 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:46:42.976822   44543 out.go:239] * 
	W0128 11:46:42.976929   44543 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:46:42.976981   44543 out.go:239] * 
	W0128 11:46:42.977584   44543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
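(For a bug report, the capture step the box asks for is simply:)

    # bundle the full minikube logs for attaching to a GitHub issue
    minikube logs --file=logs.txt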
	I0128 11:46:43.064329   44543 out.go:177] 
	W0128 11:46:43.107261   44543 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:46:43.107329   44543 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:46:43.107364   44543 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:46:43.150230   44543 out.go:177] 

                                                
                                                
** /stderr **
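
Based on the kubeadm hints and the minikube suggestion in the output above, a minimal triage sketch for this failure could look like the following; the profile name, binary path, and flag values are taken from this run, and whether systemd is the right kubelet cgroup driver for this kicbase image is an assumption to verify:

	# Check kubelet health from inside the node (kubeadm's first two suggestions)
	out/minikube-darwin-amd64 ssh -p old-k8s-version-182000 sudo systemctl status kubelet
	out/minikube-darwin-amd64 ssh -p old-k8s-version-182000 sudo journalctl -xeu kubelet
	# Look for crashed control plane containers (kubeadm's docker example)
	out/minikube-darwin-amd64 ssh -p old-k8s-version-182000 "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup driver minikube suggests, keeping the
	# remaining flags from the failing invocation below unchanged
	out/minikube-darwin-amd64 start -p old-k8s-version-182000 --driver=docker --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd
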
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-182000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 692432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:38:32.66261959Z",
	            "FinishedAt": "2023-01-28T19:38:29.825307287Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a866b115da76d1500e5c6ec1c87955e1bc3fb30a0609eeb66b3f8fe1f7fa2c1a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62981"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62982"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62983"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a866b115da76",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "ded8251749a3d30dcda48b4492f2a9fb69f5ae5dd7d576b06c81313cb7eb59b8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
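
The inspect dump above shows the node container itself is healthy: State.Status is "running" with ExitCode 0 and RestartCount 0, and the usual minikube ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1. A short sketch for pulling just those fields instead of the full JSON, using the same Go-template form minikube itself runs later in this log:

	# Container state, exit code, and restart count
	docker inspect -f '{{.State.Status}} exit={{.State.ExitCode}} restarts={{.RestartCount}}' old-k8s-version-182000
	# Host ports mapped to the SSH and API server ports
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-182000
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-182000
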
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (412.179626ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
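
Exit status 2 alongside a "Running" host is consistent with the failure above: the --format={{.Host}} template prints only the host field, while minikube's status command appears to build its exit code from per-component flags, so a nonzero code with a running host suggests the cluster components (kubelet, apiserver) are down. A sketch that surfaces the remaining fields; the template field names are assumed from minikube's status output struct and should be verified on this build:

	out/minikube-darwin-amd64 status -p old-k8s-version-182000 --format '{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'
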
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25: (3.489035414s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------|---------|--------------------------|---------------------|---------------------|
	| Command |                       Args                        |        Profile         |  User   |         Version          |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------|---------|--------------------------|---------------------|---------------------|
	| ssh     | -p calico-732000 sudo                             | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:34 PST |
	|         | containerd config dump                            |                        |         |                          |                     |                     |
	| ssh     | -p calico-732000 sudo                             | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST |                     |
	|         | systemctl status crio --all                       |                        |         |                          |                     |                     |
	|         | --full --no-pager                                 |                        |         |                          |                     |                     |
	| ssh     | -p calico-732000 sudo                             | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:34 PST |
	|         | systemctl cat crio --no-pager                     |                        |         |                          |                     |                     |
	| ssh     | -p calico-732000 sudo find                        | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:34 PST |
	|         | /etc/crio -type f -exec sh -c                     |                        |         |                          |                     |                     |
	|         | 'echo {}; cat {}' \;                              |                        |         |                          |                     |                     |
	| ssh     | -p calico-732000 sudo crio                        | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:34 PST |
	|         | config                                            |                        |         |                          |                     |                     |
	| delete  | -p calico-732000                                  | calico-732000          | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:34 PST |
	| start   | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:34 PST | 28 Jan 23 11:35 PST |
	|         | --memory=2200                                     |                        |         |                          |                     |                     |
	|         | --alsologtostderr                                 |                        |         |                          |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |                          |                     |                     |
	|         | --driver=docker                                   |                        |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |                          |                     |                     |
	| addons  | enable metrics-server -p no-preload-337000        | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:35 PST | 28 Jan 23 11:35 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |                          |                     |                     |
	| stop    | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:35 PST | 28 Jan 23 11:35 PST |
	|         | --alsologtostderr -v=3                            |                        |         |                          |                     |                     |
	| addons  | enable dashboard -p no-preload-337000             | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:35 PST | 28 Jan 23 11:35 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |                          |                     |                     |
	| start   | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:35 PST | 28 Jan 23 11:40 PST |
	|         | --memory=2200                                     |                        |         |                          |                     |                     |
	|         | --alsologtostderr                                 |                        |         |                          |                     |                     |
	|         | --wait=true --preload=false                       |                        |         |                          |                     |                     |
	|         | --driver=docker                                   |                        |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |                          |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-182000   | old-k8s-version-182000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:36 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |                          |                     |                     |
	| stop    | -p old-k8s-version-182000                         | old-k8s-version-182000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:38 PST | 28 Jan 23 11:38 PST |
	|         | --alsologtostderr -v=3                            |                        |         |                          |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-182000        | old-k8s-version-182000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:38 PST | 28 Jan 23 11:38 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |                          |                     |                     |
	| start   | -p old-k8s-version-182000                         | old-k8s-version-182000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:38 PST |                     |
	|         | --memory=2200                                     |                        |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |                          |                     |                     |
	|         | --kvm-network=default                             |                        |         |                          |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                        |         |                          |                     |                     |
	|         | --disable-driver-mounts                           |                        |         |                          |                     |                     |
	|         | --keep-context=false                              |                        |         |                          |                     |                     |
	|         | --driver=docker                                   |                        |         |                          |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                        |         |                          |                     |                     |
	| ssh     | -p no-preload-337000 sudo                         | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:41 PST |
	|         | crictl images -o json                             |                        |         |                          |                     |                     |
	| pause   | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:41 PST |
	|         | --alsologtostderr -v=1                            |                        |         |                          |                     |                     |
	| unpause | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:41 PST |
	|         | --alsologtostderr -v=1                            |                        |         |                          |                     |                     |
	| delete  | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:41 PST |
	| delete  | -p no-preload-337000                              | no-preload-337000      | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:41 PST |
	| start   | -p embed-certs-384000                             | embed-certs-384000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:41 PST | 28 Jan 23 11:42 PST |
	|         | --memory=2200                                     |                        |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |                          |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |                          |                     |                     |
	| addons  | enable metrics-server -p embed-certs-384000       | embed-certs-384000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:42 PST | 28 Jan 23 11:42 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                        |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                        |         |                          |                     |                     |
	| stop    | -p embed-certs-384000                             | embed-certs-384000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:42 PST | 28 Jan 23 11:42 PST |
	|         | --alsologtostderr -v=3                            |                        |         |                          |                     |                     |
	| addons  | enable dashboard -p embed-certs-384000            | embed-certs-384000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:42 PST | 28 Jan 23 11:42 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                        |         |                          |                     |                     |
	| start   | -p embed-certs-384000                             | embed-certs-384000     | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:42 PST |                     |
	|         | --memory=2200                                     |                        |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                     |                        |         |                          |                     |                     |
	|         | --embed-certs --driver=docker                     |                        |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                      |                        |         |                          |                     |                     |
	|---------|---------------------------------------------------|------------------------|---------|--------------------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:42:36
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:42:36.958814   45138 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:42:36.958971   45138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:42:36.958976   45138 out.go:309] Setting ErrFile to fd 2...
	I0128 11:42:36.958980   45138 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:42:36.959109   45138 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:42:36.959596   45138 out.go:303] Setting JSON to false
	I0128 11:42:36.977930   45138 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":9731,"bootTime":1674925225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:42:36.978015   45138 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:42:37.000190   45138 out.go:177] * [embed-certs-384000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:42:37.044172   45138 notify.go:220] Checking for updates...
	I0128 11:42:37.065597   45138 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:42:37.086940   45138 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:42:37.108110   45138 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:42:37.129858   45138 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:42:37.151035   45138 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:42:37.173173   45138 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:42:37.195477   45138 config.go:180] Loaded profile config "embed-certs-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:42:37.196189   45138 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:42:37.257180   45138 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:42:37.257306   45138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:42:37.398836   45138 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:42:37.306639584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:42:37.441386   45138 out.go:177] * Using the docker driver based on existing profile
	I0128 11:42:37.462272   45138 start.go:296] selected driver: docker
	I0128 11:42:37.462291   45138 start.go:857] validating driver "docker" against &{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:42:37.462380   45138 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:42:37.465090   45138 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:42:37.616234   45138 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:42:37.519980284 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:42:37.616400   45138 start_flags.go:917] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0128 11:42:37.616420   45138 cni.go:84] Creating CNI manager for ""
	I0128 11:42:37.616431   45138 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:42:37.616443   45138 start_flags.go:319] config:
	{Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:42:37.658793   45138 out.go:177] * Starting control plane node embed-certs-384000 in cluster embed-certs-384000
	I0128 11:42:37.679999   45138 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:42:37.701015   45138 out.go:177] * Pulling base image ...
	I0128 11:42:37.742963   45138 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:42:37.743001   45138 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:42:37.743033   45138 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:42:37.743045   45138 cache.go:57] Caching tarball of preloaded images
	I0128 11:42:37.743156   45138 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:42:37.743170   45138 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:42:37.743695   45138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/config.json ...
	I0128 11:42:37.799702   45138 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:42:37.799726   45138 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:42:37.799751   45138 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:42:37.799800   45138 start.go:364] acquiring machines lock for embed-certs-384000: {Name:mk52b58770b089fe99b2e9a4e47d2aa608aa8ed7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:42:37.799885   45138 start.go:368] acquired machines lock for "embed-certs-384000" in 67.239µs
	I0128 11:42:37.799908   45138 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:42:37.799917   45138 fix.go:55] fixHost starting: 
	I0128 11:42:37.800152   45138 cli_runner.go:164] Run: docker container inspect embed-certs-384000 --format={{.State.Status}}
	I0128 11:42:37.856851   45138 fix.go:103] recreateIfNeeded on embed-certs-384000: state=Stopped err=<nil>
	W0128 11:42:37.856879   45138 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:42:37.900453   45138 out.go:177] * Restarting existing docker container for "embed-certs-384000" ...
	I0128 11:42:37.242394   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:37.381738   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:37.406482   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.406493   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:37.406563   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:37.433294   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.433314   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:37.433388   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:37.457625   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.457641   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:37.457710   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:37.518144   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.518158   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:37.518229   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:37.545851   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.545864   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:37.545931   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:37.570182   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.570195   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:37.570267   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:37.597017   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.597030   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:37.597100   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:37.622857   44543 logs.go:279] 0 containers: []
	W0128 11:42:37.622870   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:37.622877   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:37.622883   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:37.661481   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:37.661493   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:37.673981   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:37.673993   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:37.729193   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:37.729204   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:37.729210   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:37.744548   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:37.744559   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:39.795448   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.050870724s)
	I0128 11:42:37.921810   45138 cli_runner.go:164] Run: docker start embed-certs-384000
	I0128 11:42:38.260881   45138 cli_runner.go:164] Run: docker container inspect embed-certs-384000 --format={{.State.Status}}
	I0128 11:42:38.322131   45138 kic.go:426] container "embed-certs-384000" state is running.
	I0128 11:42:38.322797   45138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-384000
	I0128 11:42:38.388528   45138 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/config.json ...
	I0128 11:42:38.389009   45138 machine.go:88] provisioning docker machine ...
	I0128 11:42:38.389034   45138 ubuntu.go:169] provisioning hostname "embed-certs-384000"
	I0128 11:42:38.389108   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:38.458197   45138 main.go:141] libmachine: Using SSH client type: native
	I0128 11:42:38.458434   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 63150 <nil> <nil>}
	I0128 11:42:38.458452   45138 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-384000 && echo "embed-certs-384000" | sudo tee /etc/hostname
	I0128 11:42:38.609139   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-384000
	
	I0128 11:42:38.609241   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:38.669752   45138 main.go:141] libmachine: Using SSH client type: native
	I0128 11:42:38.669916   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 63150 <nil> <nil>}
	I0128 11:42:38.669931   45138 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-384000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-384000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-384000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:42:38.805009   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
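The SSH snippet above keeps the container's hostname locally resolvable: an existing 127.0.1.1 entry in /etc/hosts is rewritten in place with sed, and one is appended only when none exists. The same logic as a minimal standalone sketch, assuming a POSIX shell inside the container (the hostname is just the one from this run):

  # Map NAME to 127.0.1.1, rewriting an existing entry or appending a new one.
  NAME="embed-certs-384000"
  if ! grep -q "[[:space:]]${NAME}\$" /etc/hosts; then
    if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
      sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NAME}/" /etc/hosts
    else
      echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts
    fi
  fi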
	I0128 11:42:38.805032   45138 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:42:38.805050   45138 ubuntu.go:177] setting up certificates
	I0128 11:42:38.805058   45138 provision.go:83] configureAuth start
	I0128 11:42:38.805136   45138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-384000
	I0128 11:42:38.861828   45138 provision.go:138] copyHostCerts
	I0128 11:42:38.861927   45138 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:42:38.861936   45138 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:42:38.862036   45138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:42:38.862244   45138 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:42:38.862252   45138 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:42:38.862313   45138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:42:38.862466   45138 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:42:38.862472   45138 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:42:38.862529   45138 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:42:38.862661   45138 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.embed-certs-384000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-384000]
	I0128 11:42:39.014066   45138 provision.go:172] copyRemoteCerts
	I0128 11:42:39.014133   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:42:39.014196   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:39.071031   45138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63150 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/embed-certs-384000/id_rsa Username:docker}
	I0128 11:42:39.164489   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:42:39.181843   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0128 11:42:39.199206   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0128 11:42:39.216439   45138 provision.go:86] duration metric: configureAuth took 411.367564ms
	I0128 11:42:39.216454   45138 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:42:39.216622   45138 config.go:180] Loaded profile config "embed-certs-384000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:42:39.216683   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:39.274141   45138 main.go:141] libmachine: Using SSH client type: native
	I0128 11:42:39.274291   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 63150 <nil> <nil>}
	I0128 11:42:39.274301   45138 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:42:39.408364   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:42:39.408384   45138 ubuntu.go:71] root file system type: overlay
	I0128 11:42:39.408570   45138 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:42:39.408665   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:39.466118   45138 main.go:141] libmachine: Using SSH client type: native
	I0128 11:42:39.466277   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 63150 <nil> <nil>}
	I0128 11:42:39.466333   45138 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:42:39.610241   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:42:39.610346   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:39.668133   45138 main.go:141] libmachine: Using SSH client type: native
	I0128 11:42:39.668302   45138 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 63150 <nil> <nil>}
	I0128 11:42:39.668319   45138 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:42:39.802563   45138 main.go:141] libmachine: SSH cmd err, output: <nil>: 
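The command above is an idempotent unit update: the freshly rendered docker.service.new is diffed against the installed unit, and only a difference triggers the move, daemon-reload, and Docker restart, so an unchanged configuration never bounces the daemon. Written out as a sketch with the same paths as the log:

  # Swap in the new unit and restart Docker only when the rendered file differs.
  if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
    sudo systemctl -f daemon-reload
    sudo systemctl -f enable docker
    sudo systemctl -f restart docker
  fi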
	I0128 11:42:39.802584   45138 machine.go:91] provisioned docker machine in 1.413556466s
	I0128 11:42:39.802591   45138 start.go:300] post-start starting for "embed-certs-384000" (driver="docker")
	I0128 11:42:39.802598   45138 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:42:39.802688   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:42:39.802746   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:39.861739   45138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63150 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/embed-certs-384000/id_rsa Username:docker}
	I0128 11:42:39.956746   45138 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:42:39.960342   45138 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:42:39.960358   45138 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:42:39.960368   45138 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:42:39.960376   45138 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:42:39.960384   45138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:42:39.960472   45138 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:42:39.960621   45138 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:42:39.960788   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:42:39.967989   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:42:39.985138   45138 start.go:303] post-start completed in 182.528338ms
	I0128 11:42:39.985234   45138 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:42:39.985301   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:40.043579   45138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63150 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/embed-certs-384000/id_rsa Username:docker}
	I0128 11:42:40.134610   45138 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:42:40.139289   45138 fix.go:57] fixHost completed within 2.339366207s
	I0128 11:42:40.139301   45138 start.go:83] releasing machines lock for "embed-certs-384000", held for 2.339402373s
	I0128 11:42:40.139396   45138 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-384000
	I0128 11:42:40.196249   45138 ssh_runner.go:195] Run: cat /version.json
	I0128 11:42:40.196268   45138 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:42:40.196319   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:40.196331   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:40.256590   45138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63150 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/embed-certs-384000/id_rsa Username:docker}
	I0128 11:42:40.256768   45138 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63150 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/embed-certs-384000/id_rsa Username:docker}
	W0128 11:42:40.347866   45138 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:42:40.347959   45138 ssh_runner.go:195] Run: systemctl --version
	I0128 11:42:40.411505   45138 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:42:40.416538   45138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:42:40.432804   45138 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:42:40.432954   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:42:40.441326   45138 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:42:40.454721   45138 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:42:40.463290   45138 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 11:42:40.463312   45138 start.go:483] detecting cgroup driver to use...
	I0128 11:42:40.463327   45138 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:42:40.463433   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:42:40.477617   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:42:40.487296   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:42:40.497077   45138 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:42:40.497171   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:42:40.508115   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:42:40.518062   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:42:40.527761   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:42:40.537118   45138 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:42:40.545808   45138 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:42:40.555271   45138 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:42:40.563504   45138 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:42:40.571500   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:42:40.634432   45138 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:42:40.703408   45138 start.go:483] detecting cgroup driver to use...
	I0128 11:42:40.703428   45138 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:42:40.703491   45138 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:42:40.714480   45138 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:42:40.714553   45138 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:42:40.726143   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:42:40.742283   45138 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:42:40.843278   45138 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:42:40.945557   45138 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:42:40.945576   45138 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:42:40.959406   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:42:41.049314   45138 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:42:41.345983   45138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:42:41.411451   45138 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:42:41.479539   45138 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:42:41.549641   45138 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:42:41.617736   45138 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:42:41.639695   45138 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:42:41.639776   45138 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:42:41.644110   45138 start.go:551] Will wait 60s for crictl version
	I0128 11:42:41.644154   45138 ssh_runner.go:195] Run: which crictl
	I0128 11:42:41.647806   45138 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:42:41.763480   45138 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:42:41.763561   45138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:42:41.791653   45138 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:42:41.841675   45138 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:42:41.841813   45138 cli_runner.go:164] Run: docker exec -t embed-certs-384000 dig +short host.docker.internal
	I0128 11:42:41.947105   45138 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:42:41.947214   45138 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:42:41.951897   45138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
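Rather than editing /etc/hosts in place, the command above rebuilds it through a temp file: any stale host.minikube.internal line is filtered out, the fresh mapping is appended, and the result is copied back with sudo. A minimal sketch of the same rewrite-and-copy pattern, using the IP and name from this run:

  # Rebuild /etc/hosts in a temp file, then copy it back; $$ scopes the file per process.
  IP="192.168.65.2"
  NAME="host.minikube.internal"
  { grep -v "$(printf '\t')${NAME}\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$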
	I0128 11:42:42.295696   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:42.381841   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:42:42.405737   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.405749   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:42:42.405820   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:42:42.430800   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.430812   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:42:42.430877   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:42:42.456163   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.456176   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:42:42.456260   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:42:42.482324   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.482339   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:42:42.482411   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:42:42.506786   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.506801   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:42:42.506873   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:42:42.532846   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.532862   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:42:42.532930   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:42:42.557080   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.557095   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:42:42.557165   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:42:42.584597   44543 logs.go:279] 0 containers: []
	W0128 11:42:42.584629   44543 logs.go:281] No container was found matching "kube-controller-manager"
	I0128 11:42:42.584641   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:42:42.584653   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:42:42.627541   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:42:42.627560   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:42:42.642819   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:42:42.642833   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:42:42.702889   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0128 11:42:42.702903   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:42:42.702910   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:42:42.720162   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:42:42.720176   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:42:44.772006   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051811665s)
	I0128 11:42:41.962620   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:42.040266   45138 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:42:42.040339   45138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:42:42.065471   45138 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0128 11:42:42.065487   45138 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:42:42.065566   45138 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:42:42.091220   45138 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0128 11:42:42.091247   45138 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:42:42.091354   45138 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:42:42.160757   45138 cni.go:84] Creating CNI manager for ""
	I0128 11:42:42.160775   45138 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:42:42.160792   45138 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0128 11:42:42.160809   45138 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-384000 NodeName:embed-certs-384000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:42:42.160938   45138 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "embed-certs-384000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:42:42.161024   45138 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=embed-certs-384000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:embed-certs-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:42:42.161090   45138 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:42:42.169312   45138 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:42:42.169378   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:42:42.176781   45138 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (450 bytes)
	I0128 11:42:42.189868   45138 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:42:42.202849   45138 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0128 11:42:42.216059   45138 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:42:42.220168   45138 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:42:42.230178   45138 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000 for IP: 192.168.67.2
	I0128 11:42:42.230195   45138 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:42:42.230362   45138 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:42:42.230412   45138 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:42:42.230502   45138 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/client.key
	I0128 11:42:42.230567   45138 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/apiserver.key.c7fa3a9e
	I0128 11:42:42.230620   45138 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/proxy-client.key
	I0128 11:42:42.230815   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:42:42.230853   45138 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:42:42.230864   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:42:42.230896   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:42:42.230930   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:42:42.230959   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:42:42.231031   45138 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:42:42.231606   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:42:42.249150   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:42:42.266826   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:42:42.284879   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/embed-certs-384000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:42:42.302472   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:42:42.320057   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:42:42.338035   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:42:42.355766   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:42:42.373324   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:42:42.391180   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:42:42.410510   45138 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:42:42.430493   45138 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:42:42.444916   45138 ssh_runner.go:195] Run: openssl version
	I0128 11:42:42.450953   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:42:42.460630   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:42:42.464979   45138 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:42:42.465032   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:42:42.471307   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:42:42.480202   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:42:42.489592   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:42:42.493982   45138 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:42:42.494038   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:42:42.500536   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:42:42.509010   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:42:42.518140   45138 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:42:42.522841   45138 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:42:42.522946   45138 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:42:42.529209   45138 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
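The openssl/ln sequence above populates OpenSSL's CA lookup directory: chain verification finds a CA in /etc/ssl/certs only through a symlink named after the certificate's subject hash (b5213941.0 for minikubeCA.pem in this run). The two steps, for the certificate paths shown in the log:

  # Link a CA into OpenSSL's hash directory so chain verification can find it.
  CERT=/usr/share/ca-certificates/minikubeCA.pem
  HASH=$(openssl x509 -hash -noout -in "$CERT")
  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"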
	I0128 11:42:42.538387   45138 kubeadm.go:401] StartCluster: {Name:embed-certs-384000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-384000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:42:42.538504   45138 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:42:42.562473   45138 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:42:42.571140   45138 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:42:42.571164   45138 kubeadm.go:633] restartCluster start
	I0128 11:42:42.571234   45138 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:42:42.579885   45138 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:42.579972   45138 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-384000
	I0128 11:42:42.645402   45138 kubeconfig.go:135] verify returned: extract IP: "embed-certs-384000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:42:42.645580   45138 kubeconfig.go:146] "embed-certs-384000" context is missing from /Users/jenkins/minikube-integration/15565-24808/kubeconfig - will repair!
	I0128 11:42:42.645957   45138 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:42:42.647407   45138 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:42:42.655975   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:42.656045   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:42.666224   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:43.167059   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:43.167306   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:43.178450   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:43.666849   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:43.666973   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:43.678039   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:44.166380   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:44.166497   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:44.176093   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:44.668388   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:44.668642   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:44.679846   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:45.166371   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:45.166474   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:45.177524   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:45.666695   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:45.666774   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:45.676577   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:46.168432   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:46.168560   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:46.179429   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:46.666483   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:46.666699   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:46.678039   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
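The repeated "Checking apiserver status" entries from process 45138 are a fixed-interval probe: roughly every 500ms a pgrep for the apiserver runs over SSH and its non-zero exit is logged, until the process appears or the restart window runs out. A minimal shell rendering of that loop, with an illustrative 60s timeout rather than minikube's actual limit:

  # Poll for the apiserver process, mirroring the logged probe cycle.
  deadline=$(( $(date +%s) + 60 ))
  until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "apiserver process never appeared" >&2
      exit 1
    fi
    sleep 0.5
  done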
	I0128 11:42:47.272991   44543 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:47.382160   44543 kubeadm.go:637] restartCluster took 4m10.928048519s
	W0128 11:42:47.382318   44543 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I0128 11:42:47.382349   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:42:47.797020   44543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:42:47.806926   44543 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:42:47.814650   44543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:42:47.814700   44543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:42:47.822491   44543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:42:47.822534   44543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:42:47.871357   44543 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:42:47.871408   44543 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:42:48.169055   44543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:42:48.169165   44543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:42:48.169287   44543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:42:48.396407   44543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:42:48.397257   44543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:42:48.403754   44543 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:42:48.464466   44543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:42:48.486240   44543 out.go:204]   - Generating certificates and keys ...
	I0128 11:42:48.486314   44543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:42:48.486394   44543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:42:48.486495   44543 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:42:48.486568   44543 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:42:48.486651   44543 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:42:48.486728   44543 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:42:48.486820   44543 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:42:48.486878   44543 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:42:48.486955   44543 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:42:48.487057   44543 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:42:48.487098   44543 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:42:48.487160   44543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:42:48.621957   44543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:42:48.689789   44543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:42:48.859512   44543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:42:48.953322   44543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:42:48.953809   44543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:42:48.975344   44543 out.go:204]   - Booting up control plane ...
	I0128 11:42:48.975447   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:42:48.975516   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:42:48.975585   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:42:48.975669   44543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:42:48.975833   44543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:42:47.166380   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:47.166538   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:47.175992   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:47.667292   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:47.667404   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:47.678461   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:48.166890   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:48.167018   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:48.176722   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:48.668330   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:48.668414   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:48.680286   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:49.167619   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:49.167719   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:49.178169   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:49.668412   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:49.668645   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:49.679640   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:50.166683   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:50.166878   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:50.177416   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:50.666454   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:50.666521   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:50.676263   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:51.168363   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:51.168545   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:51.179451   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:51.667033   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:51.667135   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:51.678431   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.166375   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:52.166450   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:52.176221   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.667966   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:52.668083   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:52.678764   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.678774   45138 api_server.go:165] Checking apiserver status ...
	I0128 11:42:52.678831   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:42:52.687269   45138 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.687281   45138 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
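The run of "Checking apiserver status" lines above is a fixed-interval poll: the same pgrep probe roughly every 500ms until it succeeds or the wait budget expires, at which point minikube concludes "needs reconfigure". A minimal Go sketch of that pattern, assuming a hypothetical runSSH helper in place of minikube's ssh_runner (here it just shells out locally):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner; it runs
// the probe locally instead of over SSH.
func runSSH(cmd string) error {
	return exec.Command("/bin/bash", "-c", cmd).Run()
}

// waitForAPIServerPID retries the exact probe from the log until it
// succeeds or the deadline passes.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := runSSH("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil // pgrep exited 0: the process exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence above
	}
	return errors.New("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServerPID(10 * time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}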
	I0128 11:42:52.687288   45138 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:42:52.687357   45138 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:42:52.712314   45138 docker.go:456] Stopping containers: [eabd6636cb7e d7b276456a60 5330228181e0 f56d7ec06a16 5013410a9ba4 de687efaf3b3 604e84ad9d9e ec7992e8a0c7 dc1242004649 8c060accd3a4 a7c8b1189c51 8627e0a8d674 4cfcb6ffd0d6 7eccbc0a0feb 03aaf95c0963 02079ae63327]
	I0128 11:42:52.712400   45138 ssh_runner.go:195] Run: docker stop eabd6636cb7e d7b276456a60 5330228181e0 f56d7ec06a16 5013410a9ba4 de687efaf3b3 604e84ad9d9e ec7992e8a0c7 dc1242004649 8c060accd3a4 a7c8b1189c51 8627e0a8d674 4cfcb6ffd0d6 7eccbc0a0feb 03aaf95c0963 02079ae63327
	I0128 11:42:52.737939   45138 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:42:52.748551   45138 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:42:52.756342   45138 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jan 28 19:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jan 28 19:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jan 28 19:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:41 /etc/kubernetes/scheduler.conf
	
	I0128 11:42:52.756395   45138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:42:52.764090   45138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:42:52.771772   45138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:42:52.779198   45138 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.779249   45138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:42:52.786553   45138 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:42:52.794280   45138 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:42:52.794329   45138 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:42:52.801711   45138 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:42:52.809308   45138 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
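The grep/rm sequence above is a staleness check: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, and any file that does not is deleted so the `kubeadm init phase kubeconfig` run that follows regenerates it. A sketch of the same check, assuming grep's non-zero exit status is the only signal used:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + name
		// grep exits non-zero when the endpoint string is absent from the file.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path) // the log runs `sudo rm -f` over SSH instead
		}
	}
}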
	I0128 11:42:52.809322   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:52.861875   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:53.579573   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:53.711382   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:53.775419   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:53.920340   45138 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:42:53.920407   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:54.430690   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:54.930670   45138 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:42:54.943196   45138 api_server.go:71] duration metric: took 1.022856749s to wait for apiserver process to appear ...
	I0128 11:42:54.943231   45138 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:42:54.943247   45138 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63149/healthz ...
	I0128 11:42:54.944701   45138 api_server.go:268] stopped: https://127.0.0.1:63149/healthz: Get "https://127.0.0.1:63149/healthz": EOF
	I0128 11:42:55.446786   45138 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63149/healthz ...
	I0128 11:42:57.515760   45138 api_server.go:278] https://127.0.0.1:63149/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:42:57.515780   45138 api_server.go:102] status: https://127.0.0.1:63149/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:42:57.944942   45138 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63149/healthz ...
	I0128 11:42:57.950903   45138 api_server.go:278] https://127.0.0.1:63149/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:42:57.950916   45138 api_server.go:102] status: https://127.0.0.1:63149/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:42:58.445207   45138 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63149/healthz ...
	I0128 11:42:58.450150   45138 api_server.go:278] https://127.0.0.1:63149/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:42:58.450165   45138 api_server.go:102] status: https://127.0.0.1:63149/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:42:58.944797   45138 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63149/healthz ...
	I0128 11:42:58.950180   45138 api_server.go:278] https://127.0.0.1:63149/healthz returned 200:
	ok
	I0128 11:42:58.956966   45138 api_server.go:140] control plane version: v1.26.1
	I0128 11:42:58.956982   45138 api_server.go:130] duration metric: took 4.013735392s to wait for apiserver health ...
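The healthz sequence above is the normal startup progression: a connection EOF while nothing is listening yet, 403 for the anonymous user before the RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) finish, then 200 "ok". A minimal probe sketch, assuming the port from this log and a skip-verify TLS client, since the apiserver's serving cert is signed by the cluster CA rather than a system root:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Local health probe: skip cert verification for the
			// cluster-CA-signed serving certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://127.0.0.1:63149/healthz" // port taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. EOF before the listener is up
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}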
	I0128 11:42:58.956988   45138 cni.go:84] Creating CNI manager for ""
	I0128 11:42:58.956996   45138 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:42:58.980344   45138 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:42:59.001245   45138 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:42:59.009999   45138 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
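The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration announced just before it. A representative conflist of that shape; the field values and subnet here are illustrative assumptions, not the exact bytes minikube writes:

package main

import "os"

// conflist sketches a typical bridge + portmap plugin chain; the subnet
// and flags are assumptions, not read from the log.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// minikube scps the bytes from memory; a plain write is the local equivalent.
	os.WriteFile("/tmp/1-k8s.conflist", []byte(conflist), 0o644)
}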
	I0128 11:42:59.024076   45138 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:42:59.031394   45138 system_pods.go:59] 8 kube-system pods found
	I0128 11:42:59.031410   45138 system_pods.go:61] "coredns-787d4945fb-g7ckz" [1414dbe7-8522-4721-8a37-a7a811be2380] Running
	I0128 11:42:59.031414   45138 system_pods.go:61] "etcd-embed-certs-384000" [487850bd-51f2-417e-8b5c-41fb62129ab9] Running
	I0128 11:42:59.031419   45138 system_pods.go:61] "kube-apiserver-embed-certs-384000" [ef5ae937-7a95-4132-be19-a5e159f05f9d] Running
	I0128 11:42:59.031423   45138 system_pods.go:61] "kube-controller-manager-embed-certs-384000" [d7f1b11c-c851-4c88-b7b3-e39ae090da9b] Running
	I0128 11:42:59.031427   45138 system_pods.go:61] "kube-proxy-52xxl" [8ebd7747-2f3e-4924-9b0f-e4f168a412e6] Running
	I0128 11:42:59.031432   45138 system_pods.go:61] "kube-scheduler-embed-certs-384000" [8c30ae32-7065-45c1-84ec-a3177b04a06c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:42:59.031437   45138 system_pods.go:61] "metrics-server-7997d45854-vpfz4" [86fd97c0-17ab-41dd-8126-fe70b9c6c453] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:42:59.031441   45138 system_pods.go:61] "storage-provisioner" [40300de6-a73e-455a-8bce-e7fa25e777b5] Running
	I0128 11:42:59.031445   45138 system_pods.go:74] duration metric: took 7.358114ms to wait for pod list to return data ...
	I0128 11:42:59.031451   45138 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:42:59.033964   45138 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:42:59.033981   45138 node_conditions.go:123] node cpu capacity is 6
	I0128 11:42:59.033990   45138 node_conditions.go:105] duration metric: took 2.534277ms to run NodePressure ...
	I0128 11:42:59.034005   45138 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:42:59.319097   45138 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0128 11:42:59.324486   45138 kubeadm.go:784] kubelet initialised
	I0128 11:42:59.324500   45138 kubeadm.go:785] duration metric: took 5.389223ms waiting for restarted kubelet to initialise ...
	I0128 11:42:59.324507   45138 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0128 11:42:59.329650   45138 pod_ready.go:78] waiting up to 4m0s for pod "coredns-787d4945fb-g7ckz" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.335537   45138 pod_ready.go:92] pod "coredns-787d4945fb-g7ckz" in "kube-system" namespace has status "Ready":"True"
	I0128 11:42:59.335548   45138 pod_ready.go:81] duration metric: took 5.885043ms waiting for pod "coredns-787d4945fb-g7ckz" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.335556   45138 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.340814   45138 pod_ready.go:92] pod "etcd-embed-certs-384000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:42:59.340829   45138 pod_ready.go:81] duration metric: took 5.26653ms waiting for pod "etcd-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.340843   45138 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.346730   45138 pod_ready.go:92] pod "kube-apiserver-embed-certs-384000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:42:59.346743   45138 pod_ready.go:81] duration metric: took 5.889477ms waiting for pod "kube-apiserver-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.346754   45138 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.427679   45138 pod_ready.go:92] pod "kube-controller-manager-embed-certs-384000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:42:59.427691   45138 pod_ready.go:81] duration metric: took 80.929952ms waiting for pod "kube-controller-manager-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.427700   45138 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-52xxl" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.829600   45138 pod_ready.go:92] pod "kube-proxy-52xxl" in "kube-system" namespace has status "Ready":"True"
	I0128 11:42:59.829627   45138 pod_ready.go:81] duration metric: took 401.903894ms waiting for pod "kube-proxy-52xxl" in "kube-system" namespace to be "Ready" ...
	I0128 11:42:59.829642   45138 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
	I0128 11:43:02.236007   45138 pod_ready.go:102] pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:04.735901   45138 pod_ready.go:102] pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:07.234697   45138 pod_ready.go:102] pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:09.235447   45138 pod_ready.go:102] pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:10.235737   45138 pod_ready.go:92] pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace has status "Ready":"True"
	I0128 11:43:10.235751   45138 pod_ready.go:81] duration metric: took 10.406077933s waiting for pod "kube-scheduler-embed-certs-384000" in "kube-system" namespace to be "Ready" ...
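Each pod_ready.go wait above polls a single pod until its PodReady condition reports True, with a 4m0s budget per pod. A client-go sketch of the same check (pod and namespace names taken from the log; the polling interval is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the `"Ready":"True"` test in the log: scan the pod's
// conditions for PodReady and require status True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // same 4m0s budget as the log
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-embed-certs-384000", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println(`has status "Ready":"True"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}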
	I0128 11:43:10.235758   45138 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace to be "Ready" ...
	I0128 11:43:12.249081   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:14.249452   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:16.749752   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:19.247075   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:21.749325   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:24.248866   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:26.746839   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:28.962744   44543 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:43:28.964132   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:28.964338   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:43:28.749523   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:30.749569   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:33.964812   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:33.965223   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:43:33.248893   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:35.249605   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:37.747874   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:39.749598   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:43.966115   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:43:43.966280   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:43:42.248316   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:44.249460   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:46.746174   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:48.748733   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:51.247440   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:53.247967   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:55.248343   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:57.249371   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:43:59.249884   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:01.748783   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:03.967247   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:44:03.967407   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:44:04.249304   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:06.747858   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:08.749967   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:11.247754   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:13.248092   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:15.249464   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:17.746869   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:19.747153   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:21.750039   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:24.247774   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:26.249100   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:28.747865   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:30.749051   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:33.252752   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:35.749056   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:37.749568   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:40.246246   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:43.968381   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:44:43.968526   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:44:43.968543   44543 kubeadm.go:322] 
	I0128 11:44:43.968585   44543 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:44:43.968640   44543 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:44:43.968649   44543 kubeadm.go:322] 
	I0128 11:44:43.968675   44543 kubeadm.go:322] This error is likely caused by:
	I0128 11:44:43.968705   44543 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:44:43.968789   44543 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:44:43.968795   44543 kubeadm.go:322] 
	I0128 11:44:43.968869   44543 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:44:43.968905   44543 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:44:43.968937   44543 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:44:43.968944   44543 kubeadm.go:322] 
	I0128 11:44:43.969045   44543 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:44:43.969126   44543 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:44:43.969199   44543 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:44:43.969235   44543 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:44:43.969296   44543 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:44:43.969323   44543 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:44:43.972133   44543 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:44:43.972212   44543 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:44:43.972318   44543 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:44:43.972401   44543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:44:43.972487   44543 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:44:43.972572   44543 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0128 11:44:43.972733   44543 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
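The lines that follow show the recovery path after this failed kubelet-check (the probe of http://localhost:10248/healthz above): `kubeadm reset --force`, a stale-config check, then a second `kubeadm init` with the same flags. A sketch of that init-reset-retry flow, with the command lines abbreviated from the full versions shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run shells a command out and streams its output, standing in for the
// ssh_runner invocations in the log.
func run(cmd string) error {
	c := exec.Command("/bin/bash", "-c", cmd)
	c.Stdout, c.Stderr = os.Stdout, os.Stderr
	return c.Run()
}

func main() {
	initCmd := "sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml" // preflight flags elided
	if err := run(initCmd); err != nil {
		fmt.Println("! initialization failed, will try again:", err)
		_ = run("sudo kubeadm reset --cri-socket /var/run/dockershim.sock --force")
		if err := run(initCmd); err != nil {
			fmt.Println("init failed again:", err)
		}
	}
}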
	
	I0128 11:44:43.972765   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0128 11:44:44.386961   44543 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:44:44.397184   44543 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0128 11:44:44.397242   44543 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:44:44.404787   44543 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0128 11:44:44.404808   44543 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0128 11:44:44.454297   44543 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0128 11:44:44.454347   44543 kubeadm.go:322] [preflight] Running pre-flight checks
	I0128 11:44:44.765732   44543 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0128 11:44:44.765807   44543 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0128 11:44:44.765871   44543 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0128 11:44:44.988444   44543 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0128 11:44:44.989287   44543 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0128 11:44:44.995905   44543 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0128 11:44:45.064719   44543 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0128 11:44:45.086245   44543 out.go:204]   - Generating certificates and keys ...
	I0128 11:44:45.086316   44543 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0128 11:44:45.086376   44543 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0128 11:44:45.086477   44543 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0128 11:44:45.086524   44543 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0128 11:44:45.086582   44543 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0128 11:44:45.086618   44543 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0128 11:44:45.086687   44543 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0128 11:44:45.086737   44543 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0128 11:44:45.086806   44543 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0128 11:44:45.086872   44543 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0128 11:44:45.086906   44543 kubeadm.go:322] [certs] Using the existing "sa" key
	I0128 11:44:45.086947   44543 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0128 11:44:45.154429   44543 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0128 11:44:45.276707   44543 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0128 11:44:45.405556   44543 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0128 11:44:45.590172   44543 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0128 11:44:45.590722   44543 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0128 11:44:45.612150   44543 out.go:204]   - Booting up control plane ...
	I0128 11:44:45.612238   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0128 11:44:45.612320   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0128 11:44:45.612402   44543 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0128 11:44:45.612474   44543 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0128 11:44:45.612619   44543 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0128 11:44:42.249776   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:44.746540   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:46.749813   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:49.247916   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:51.248612   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:53.747902   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:55.749811   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:44:58.248231   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:00.747233   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:03.248865   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:05.746604   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:07.750034   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:10.249061   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:12.748559   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:14.748621   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:16.749265   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:19.249201   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:21.250651   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:25.601363   44543 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0128 11:45:25.602304   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:25.602557   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:23.749507   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:25.750609   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:30.604110   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:30.604416   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:28.251141   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:30.749042   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:32.752262   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:35.250505   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:40.606062   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:45:40.606272   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:37.251025   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:39.251753   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:41.252084   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:43.749607   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:46.251392   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:48.252119   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:50.752836   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:53.249890   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:55.252861   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:00.608193   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:46:00.608357   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:45:57.750321   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:45:59.752631   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:02.250714   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:04.752058   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:07.251722   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:09.251917   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:11.751140   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:13.751294   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:16.250673   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:18.251986   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:20.752748   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:23.250853   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:25.253045   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:27.752207   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:30.251800   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:32.751484   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:34.752642   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:40.610642   44543 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0128 11:46:40.610865   44543 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0128 11:46:40.610890   44543 kubeadm.go:322] 
	I0128 11:46:40.610939   44543 kubeadm.go:322] Unfortunately, an error has occurred:
	I0128 11:46:40.610985   44543 kubeadm.go:322] 	timed out waiting for the condition
	I0128 11:46:40.610998   44543 kubeadm.go:322] 
	I0128 11:46:40.611035   44543 kubeadm.go:322] This error is likely caused by:
	I0128 11:46:40.611068   44543 kubeadm.go:322] 	- The kubelet is not running
	I0128 11:46:40.611196   44543 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0128 11:46:40.611214   44543 kubeadm.go:322] 
	I0128 11:46:40.611326   44543 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0128 11:46:40.611369   44543 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0128 11:46:40.611401   44543 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0128 11:46:40.611408   44543 kubeadm.go:322] 
	I0128 11:46:40.611532   44543 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0128 11:46:40.611612   44543 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0128 11:46:40.611700   44543 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0128 11:46:40.611735   44543 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0128 11:46:40.611794   44543 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0128 11:46:40.611822   44543 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0128 11:46:40.614670   44543 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0128 11:46:40.614732   44543 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0128 11:46:40.614833   44543 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
	I0128 11:46:40.614908   44543 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0128 11:46:40.614982   44543 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0128 11:46:40.615043   44543 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
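	A minimal shell sketch of the troubleshooting sequence kubeadm recommends above, run inside the node (for this profile, e.g. via 'minikube ssh -p old-k8s-version-182000'; CONTAINERID is whatever the grep turns up):

	    systemctl status kubelet                  # is the kubelet service active?
	    journalctl -xeu kubelet | tail -n 50      # most recent kubelet errors
	    docker ps -a | grep kube | grep -v pause  # any control-plane containers at all?
	    docker logs CONTAINERID                   # logs of a failing container found above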
	I0128 11:46:40.615060   44543 kubeadm.go:403] StartCluster complete in 8m4.18836856s
	I0128 11:46:40.615155   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0128 11:46:40.638401   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.638414   44543 logs.go:281] No container was found matching "kube-apiserver"
	I0128 11:46:40.638487   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0128 11:46:40.662066   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.662081   44543 logs.go:281] No container was found matching "etcd"
	I0128 11:46:40.662163   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0128 11:46:40.684921   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.684935   44543 logs.go:281] No container was found matching "coredns"
	I0128 11:46:40.685002   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0128 11:46:40.707757   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.707770   44543 logs.go:281] No container was found matching "kube-scheduler"
	I0128 11:46:40.707838   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0128 11:46:40.732010   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.732024   44543 logs.go:281] No container was found matching "kube-proxy"
	I0128 11:46:40.732097   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0128 11:46:40.756267   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.756281   44543 logs.go:281] No container was found matching "kubernetes-dashboard"
	I0128 11:46:40.756349   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0128 11:46:40.779588   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.779605   44543 logs.go:281] No container was found matching "storage-provisioner"
	I0128 11:46:40.779687   44543 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0128 11:46:40.803900   44543 logs.go:279] 0 containers: []
	W0128 11:46:40.803913   44543 logs.go:281] No container was found matching "kube-controller-manager"
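	Each probe above uses the same docker name filter; a compact sketch of that loop (component names taken from the probes in this run):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kubernetes-dashboard storage-provisioner kube-controller-manager; do
	      ids=$(docker ps -a --filter=name=k8s_$c --format='{{.ID}}')
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done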
	I0128 11:46:40.803920   44543 logs.go:124] Gathering logs for Docker ...
	I0128 11:46:40.803928   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0128 11:46:40.819641   44543 logs.go:124] Gathering logs for container status ...
	I0128 11:46:40.819654   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0128 11:46:37.251575   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:39.750310   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:41.750992   45138 pod_ready.go:102] pod "metrics-server-7997d45854-vpfz4" in "kube-system" namespace has status "Ready":"False"
	I0128 11:46:42.870991   44543 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051318213s)
	I0128 11:46:42.871102   44543 logs.go:124] Gathering logs for kubelet ...
	I0128 11:46:42.871109   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0128 11:46:42.908391   44543 logs.go:124] Gathering logs for dmesg ...
	I0128 11:46:42.908404   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0128 11:46:42.920405   44543 logs.go:124] Gathering logs for describe nodes ...
	I0128 11:46:42.920418   44543 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0128 11:46:42.976791   44543 logs.go:131] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0128 11:46:42.976808   44543 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0128 11:46:42.976822   44543 out.go:239] * 
	W0128 11:46:42.976929   44543 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:46:42.976981   44543 out.go:239] * 
	W0128 11:46:42.977584   44543 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
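	For reference, the command the box above asks for, scoped to this run's profile:

	    out/minikube-darwin-amd64 logs --file=logs.txt -p old-k8s-version-182000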
	I0128 11:46:43.064329   44543 out.go:177] 
	W0128 11:46:43.107261   44543 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.23. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0128 11:46:43.107329   44543 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0128 11:46:43.107364   44543 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0128 11:46:43.150230   44543 out.go:177] 
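	A minimal sketch of the suggestion above, recreating the profile with the kubelet cgroup driver pinned to systemd (delete-and-recreate is an assumption; the flags shown are the suggestion's --extra-config plus the Kubernetes version and driver used in this run):

	    out/minikube-darwin-amd64 delete -p old-k8s-version-182000
	    out/minikube-darwin-amd64 start -p old-k8s-version-182000 --driver=docker \
	      --kubernetes-version=v1.16.0 \
	      --extra-config=kubelet.cgroup-driver=systemd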
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 19:46:44 UTC. --
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469071572Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469910601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.470150136Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.523375038Z" level=info msg="Starting up"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525046742Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525083922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525106137Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525113757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526268026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526308462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526321446Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526328970Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.533397728Z" level=info msg="Loading containers: start."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.611043216Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.644702151Z" level=info msg="Loading containers: done."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653097814Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653195626Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.675039977Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.681266325Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-01-28T19:46:46Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:46:47 up  2:45,  0 users,  load average: 0.03, 0.69, 1.20
	Linux old-k8s-version-182000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 19:46:47 UTC. --
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 160.
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: I0128 19:46:45.743001   14751 server.go:410] Version: v1.16.0
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: I0128 19:46:45.743294   14751 plugins.go:100] No cloud provider specified.
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: I0128 19:46:45.743333   14751 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: I0128 19:46:45.745168   14751 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: W0128 19:46:45.746063   14751 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: W0128 19:46:45.746165   14751 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:46:45 old-k8s-version-182000 kubelet[14751]: F0128 19:46:45.746192   14751 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:46:45 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:46:46 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Jan 28 19:46:46 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:46 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: I0128 19:46:46.496411   14763 server.go:410] Version: v1.16.0
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: I0128 19:46:46.496864   14763 plugins.go:100] No cloud provider specified.
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: I0128 19:46:46.496903   14763 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: I0128 19:46:46.498777   14763 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: W0128 19:46:46.499612   14763 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: W0128 19:46:46.499648   14763 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:46:46 old-k8s-version-182000 kubelet[14763]: F0128 19:46:46.499673   14763 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:46:46 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:46:46 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0128 11:46:46.866729   45451 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
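The kubelet section of the dump above shows the underlying crash loop: every restart dies with 'failed to run Kubelet: mountpoint for cpu not found', i.e. the v1.16 kubelet cannot find a separate cpu cgroup mount (it predates cgroup v2 support). A quick way to confirm what cgroup layout the node presents, run inside the node (output will vary):

    mount | grep cgroup                        # which cgroup hierarchies are mounted
    stat -fc %T /sys/fs/cgroup                 # 'cgroup2fs' means a unified cgroup v2 host
    docker info --format '{{.CgroupDriver}}/{{.CgroupVersion}}'   # e.g. cgroupfs/2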
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (410.743488ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-182000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (496.37s)
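The status probe used by the harness extracts a single field; related invocations for the same profile:

    out/minikube-darwin-amd64 status -p old-k8s-version-182000                           # full component table
    out/minikube-darwin-amd64 status --format='{{.APIServer}}' -p old-k8s-version-182000 # just the apiserver field, "Stopped" here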

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
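Each poll below is a label-selector pod list against the apiserver; a minimal kubectl equivalent (namespace and selector taken from the test), whose EOF failure matches the stopped apiserver seen above:

    kubectl get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard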
E0128 11:46:53.059948   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:47:03.475900   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:47:07.319216   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:47:09.281516   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:48:05.557477   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:48:14.982030   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:48:33.542443   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:48:37.856035   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:48:46.625061   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:49:41.553999   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:50:01.611249   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:50:06.812937   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:50:09.670028   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:50:31.133665   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:50:32.969870   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:50:58.823418   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:51:24.655380   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:51:36.597235   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:51:39.816243   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:51:42.516595   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:51:56.014476   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:52:03.476569   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:52:07.319117   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:52:09.283920   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:53:02.865757   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:53:18.506290   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:53:26.609058   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:53:32.329368   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:53:33.545220   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:53:37.855669   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:53:46.624404   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:55:06.813619   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:55:31.134649   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:55:32.970598   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (583.844622ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-182000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
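The WARNING above comes from the helper repeatedly polling the dashboard pod list over the forwarded apiserver port for the full 9m0s; the EOF means nothing answered on 127.0.0.1:62983 (the host mapping for the container's 8443/tcp, visible in the inspect output below). A manual re-check of the same query, assuming the profile's kubeconfig context still exists, could look like:

  kubectl --context old-k8s-version-182000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

or, probing the raw endpoint from the warning (this only exercises the connection; a live apiserver would reject the unauthenticated request with an HTTP error rather than EOF):

  curl -k "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"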
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:

-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 692432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:38:32.66261959Z",
	            "FinishedAt": "2023-01-28T19:38:29.825307287Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a866b115da76d1500e5c6ec1c87955e1bc3fb30a0609eeb66b3f8fe1f7fa2c1a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62981"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62982"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62983"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a866b115da76",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "ded8251749a3d30dcda48b4492f2a9fb69f5ae5dd7d576b06c81313cb7eb59b8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
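The inspect dump confirms the container itself is running and that 8443/tcp is published on 127.0.0.1:62983, the same address the pod-list polls were failing against. When only a field or two matters, docker inspect accepts a Go-template -f/--format string (the same pattern the harness uses below to look up the 22/tcp SSH port), for example:

  docker inspect -f '{{.State.Status}}' old-k8s-version-182000
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-182000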
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (500.019328ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
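minikube status encodes component state in its exit code, which is why the harness notes that exit status 2 "may be ok": here the host container reports Running while the apiserver reports Stopped. A sketch of reading both fields in one call, using only the template fields the harness itself queries above:

  out/minikube-darwin-amd64 status -p old-k8s-version-182000 --format='host:{{.Host}} apiserver:{{.APIServer}}'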
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25: (3.925778039s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   |         Version          |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	| start   | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:42 PST | 28 Jan 23 11:47 PST |
	|         | --memory=2200                                        |                              |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |                          |                     |                     |
	|         | --embed-certs --driver=docker                        |                              |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |                          |                     |                     |
	| ssh     | -p embed-certs-384000 sudo                           | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | crictl images -o json                                |                              |         |                          |                     |                     |
	| pause   | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| delete  | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	| delete  | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	| delete  | -p                                                   | disable-driver-mounts-244000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | disable-driver-mounts-244000                         |                              |         |                          |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --memory=2200                                        |                              |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |                          |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |                          |                     |                     |
	|         | --driver=docker                                      |                              |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |                          |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |                          |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |                          |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-404000     | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |                          |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --memory=2200                                        |                              |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |                          |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |                          |                     |                     |
	|         | --driver=docker                                      |                              |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |                          |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |                          |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	| start   | -p newest-cni-573000 --memory=2200 --alsologtostderr | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:55 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |                          |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |                          |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |                          |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |                          |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |                          |                     |                     |
	| addons  | enable metrics-server -p newest-cni-573000           | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |                          |                     |                     |
	| stop    | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --alsologtostderr -v=3                               |                              |         |                          |                     |                     |
	| addons  | enable dashboard -p newest-cni-573000                | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |                          |                     |                     |
	| start   | -p newest-cni-573000 --memory=2200 --alsologtostderr | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:56 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |                          |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |                          |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |                          |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |                          |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |                          |                     |                     |
	| ssh     | -p newest-cni-573000 sudo                            | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | crictl images -o json                                |                              |         |                          |                     |                     |
	| pause   | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	|---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:55:44
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:55:44.585656   46693 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:55:44.585825   46693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:55:44.585831   46693 out.go:309] Setting ErrFile to fd 2...
	I0128 11:55:44.585835   46693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:55:44.585947   46693 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:55:44.586473   46693 out.go:303] Setting JSON to false
	I0128 11:55:44.604636   46693 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10519,"bootTime":1674925225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:55:44.604714   46693 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:55:44.627086   46693 out.go:177] * [newest-cni-573000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:55:44.648789   46693 notify.go:220] Checking for updates...
	I0128 11:55:44.670698   46693 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:55:44.713558   46693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:55:44.755584   46693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:55:44.797390   46693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:55:44.839584   46693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:55:44.860698   46693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:55:44.882367   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:55:44.883036   46693 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:55:44.944883   46693 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:55:44.945016   46693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:55:45.090681   46693 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:55:44.995945665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:55:45.134609   46693 out.go:177] * Using the docker driver based on existing profile
	I0128 11:55:45.155594   46693 start.go:296] selected driver: docker
	I0128 11:55:45.155626   46693 start.go:857] validating driver "docker" against &{Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequeste
d:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:45.155795   46693 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:55:45.159644   46693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:55:45.300837   46693 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:55:45.209146798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:55:45.300994   46693 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0128 11:55:45.301012   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:55:45.301024   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:55:45.301037   46693 start_flags.go:319] config:
	{Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Networ
kPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:45.344644   46693 out.go:177] * Starting control plane node newest-cni-573000 in cluster newest-cni-573000
	I0128 11:55:45.366462   46693 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:55:45.387608   46693 out.go:177] * Pulling base image ...
	I0128 11:55:45.429673   46693 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:55:45.429721   46693 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:55:45.429773   46693 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:55:45.429796   46693 cache.go:57] Caching tarball of preloaded images
	I0128 11:55:45.429990   46693 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:55:45.430012   46693 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:55:45.431150   46693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/config.json ...
	I0128 11:55:45.486764   46693 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:55:45.486777   46693 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:55:45.486795   46693 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:55:45.486833   46693 start.go:364] acquiring machines lock for newest-cni-573000: {Name:mk74b458fad51dc514dc72a8b30af124951b5ffc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:55:45.486916   46693 start.go:368] acquired machines lock for "newest-cni-573000" in 65.467µs
	I0128 11:55:45.486947   46693 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:55:45.486957   46693 fix.go:55] fixHost starting: 
	I0128 11:55:45.487175   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:55:45.543684   46693 fix.go:103] recreateIfNeeded on newest-cni-573000: state=Stopped err=<nil>
	W0128 11:55:45.543714   46693 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:55:45.565699   46693 out.go:177] * Restarting existing docker container for "newest-cni-573000" ...
	I0128 11:55:45.587436   46693 cli_runner.go:164] Run: docker start newest-cni-573000
	I0128 11:55:45.925304   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:55:45.988749   46693 kic.go:426] container "newest-cni-573000" state is running.
	I0128 11:55:45.989719   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:46.066529   46693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/config.json ...
	I0128 11:55:46.066967   46693 machine.go:88] provisioning docker machine ...
	I0128 11:55:46.066991   46693 ubuntu.go:169] provisioning hostname "newest-cni-573000"
	I0128 11:55:46.067065   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.135082   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.135346   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.135361   46693 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-573000 && echo "newest-cni-573000" | sudo tee /etc/hostname
	I0128 11:55:46.277967   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-573000
	
	I0128 11:55:46.278066   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.339940   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.340093   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.340107   46693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-573000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-573000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-573000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:55:46.473851   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:55:46.473870   46693 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:55:46.473893   46693 ubuntu.go:177] setting up certificates
	I0128 11:55:46.473901   46693 provision.go:83] configureAuth start
	I0128 11:55:46.473976   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:46.533280   46693 provision.go:138] copyHostCerts
	I0128 11:55:46.533380   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:55:46.533389   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:55:46.533497   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:55:46.533703   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:55:46.533710   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:55:46.533776   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:55:46.533943   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:55:46.533950   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:55:46.534021   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:55:46.534152   46693 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.newest-cni-573000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-573000]
	I0128 11:55:46.572870   46693 provision.go:172] copyRemoteCerts
	I0128 11:55:46.572923   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:55:46.572977   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.630419   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:46.723336   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:55:46.741079   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0128 11:55:46.758556   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:55:46.776554   46693 provision.go:86] duration metric: configureAuth took 302.641005ms
	I0128 11:55:46.776578   46693 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:55:46.776752   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:55:46.776822   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.834008   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.834160   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.834169   46693 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:55:46.966925   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:55:46.966940   46693 ubuntu.go:71] root file system type: overlay
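
	The probe above is how the provisioner learns the root filesystem type (overlay inside the kicbase container), which feeds the docker unit templating that follows. Run by hand:

		# df prints a header row plus one data row; tail keeps the data row.
		df --output=fstype / | tail -n 1    # prints "overlay" in this environment
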
	I0128 11:55:46.967145   46693 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:55:46.967235   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.025909   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:47.026066   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:47.026119   46693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:55:47.168199   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:55:47.168312   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.226664   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:47.226817   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:47.226831   46693 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:55:47.363667   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
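
	That one-liner is the apply step of a write-new/diff/swap pattern: the rendered unit went to docker.service.new, and only when it differs from the live unit is it moved into place, followed by daemon-reload, enable, and restart. A generalized sketch (function and variable names are illustrative):

		install_unit_if_changed() {
		    # Swap in the rendered unit (and restart the service) only on change.
		    local new="$1" dst="$2" svc="$3"
		    if ! sudo diff -u "$dst" "$new"; then
		        sudo mv "$new" "$dst"
		        sudo systemctl -f daemon-reload
		        sudo systemctl -f enable "$svc"
		        sudo systemctl -f restart "$svc"
		    fi
		}
		install_unit_if_changed /lib/systemd/system/docker.service.new \
		    /lib/systemd/system/docker.service docker
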
	I0128 11:55:47.363686   46693 machine.go:91] provisioned docker machine in 1.296707075s
	I0128 11:55:47.363695   46693 start.go:300] post-start starting for "newest-cni-573000" (driver="docker")
	I0128 11:55:47.363702   46693 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:55:47.363792   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:55:47.363845   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.421259   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.514928   46693 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:55:47.518600   46693 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:55:47.518624   46693 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:55:47.518631   46693 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:55:47.518636   46693 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:55:47.518643   46693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:55:47.518742   46693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:55:47.518895   46693 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:55:47.519080   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:55:47.526594   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:55:47.543871   46693 start.go:303] post-start completed in 180.16471ms
	I0128 11:55:47.543949   46693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:55:47.544030   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.601768   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.692171   46693 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
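
	The two probes above read disk pressure on /var from df's second (data) row: percent used, then gigabytes free. Standalone:

		df -h  /var | awk 'NR==2{print $5}'    # percent of /var used, e.g. "12%"
		df -BG /var | awk 'NR==2{print $4}'    # gigabytes still free, e.g. "51G"
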
	I0128 11:55:47.696892   46693 fix.go:57] fixHost completed within 2.209929524s
	I0128 11:55:47.696906   46693 start.go:83] releasing machines lock for "newest-cni-573000", held for 2.209977952s
	I0128 11:55:47.696996   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:47.753177   46693 ssh_runner.go:195] Run: cat /version.json
	I0128 11:55:47.753187   46693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:55:47.753252   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.753258   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.814636   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.814811   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	W0128 11:55:47.906879   46693 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:55:47.906959   46693 ssh_runner.go:195] Run: systemctl --version
	I0128 11:55:52.924679   46693 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.171460964s)
	W0128 11:55:52.924708   46693 start.go:833] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	I0128 11:55:52.924752   46693 ssh_runner.go:235] Completed: systemctl --version: (5.017755945s)
	W0128 11:55:52.924803   46693 out.go:239] ! This container is having trouble accessing https://registry.k8s.io
	W0128 11:55:52.924810   46693 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0128 11:55:52.924834   46693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:55:52.929869   46693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:55:52.945747   46693 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
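
	The find/sed pipeline above normalizes any loopback CNI config in place: it injects a "name" key if one is missing and pins cniVersion to 1.0.0. Simplified to a single file (the path is illustrative; the actual file name is globbed in the log):

		f=/etc/cni/net.d/200-loopback.conf    # illustrative path
		grep -q '"name"' "$f" || \
		    sudo sed -i '/"type": "loopback"/i\    "name": "loopback",' "$f"
		sudo sed -i 's/"cniVersion": ".*"/"cniVersion": "1.0.0"/g' "$f"
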
	I0128 11:55:52.945865   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:55:52.953494   46693 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:55:52.966656   46693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:55:52.974615   46693 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 11:55:52.974633   46693 start.go:483] detecting cgroup driver to use...
	I0128 11:55:52.974645   46693 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:55:52.974730   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:55:52.988070   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:55:52.996976   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:55:53.005536   46693 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:55:53.005591   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:55:53.014011   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:55:53.022673   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:55:53.031004   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:55:53.039514   46693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:55:53.047444   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:55:53.055800   46693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:55:53.062918   46693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:55:53.069927   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:53.138075   46693 ssh_runner.go:195] Run: sudo systemctl restart containerd
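
	Taken together, the sed calls above pin the sandbox (pause) image, keep the cgroupfs driver by forcing SystemdCgroup = false, and migrate both legacy shims to io.containerd.runc.v2 before containerd restarts. Condensed into one pass (assuming a stock config.toml):

		c=/etc/containerd/config.toml
		sudo sed -i -r \
		    -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
		    -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|' \
		    -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
		    -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$c"
		sudo systemctl daemon-reload && sudo systemctl restart containerd
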
	I0128 11:55:53.214367   46693 start.go:483] detecting cgroup driver to use...
	I0128 11:55:53.214397   46693 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:55:53.214476   46693 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:55:53.227860   46693 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:55:53.227935   46693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:55:53.239668   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:55:53.256390   46693 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:55:53.361505   46693 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:55:53.456602   46693 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:55:53.456617   46693 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:55:53.469820   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:53.554340   46693 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:55:53.795282   46693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:55:53.873250   46693 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:55:53.932290   46693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:55:54.002176   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:54.077212   46693 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:55:54.089017   46693 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:55:54.089098   46693 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0128 11:55:54.093067   46693 start.go:551] Will wait 60s for crictl version
	I0128 11:55:54.093117   46693 ssh_runner.go:195] Run: which crictl
	I0128 11:55:54.096657   46693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:55:54.197428   46693 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:55:54.197515   46693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:55:54.226602   46693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:55:54.299415   46693 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:55:54.299635   46693 cli_runner.go:164] Run: docker exec -t newest-cni-573000 dig +short host.docker.internal
	I0128 11:55:54.416524   46693 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:55:54.416641   46693 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:55:54.421025   46693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
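
	This is the hosts-file refresh pattern used here for host.minikube.internal and again later for control-plane.minikube.internal: filter out any stale entry, append the fresh mapping to a temp file, then sudo cp it over /etc/hosts (a bare > redirect would run unprivileged). Spelled out with this run's values:

		ip=192.168.65.2; name=host.minikube.internal    # values from this run
		{ grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
		sudo cp "/tmp/h.$$" /etc/hosts
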
	I0128 11:55:54.430979   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:54.510813   46693 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0128 11:55:54.532767   46693 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:55:54.532934   46693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:55:54.559249   46693 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:55:54.559277   46693 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:55:54.559362   46693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:55:54.583834   46693 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:55:54.583858   46693 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:55:54.583951   46693 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:55:54.654053   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:55:54.654070   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:55:54.654091   46693 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0128 11:55:54.654115   46693 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-573000 NodeName:newest-cni-573000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:55:54.654248   46693 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-573000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:55:54.654341   46693 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-573000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:55:54.654419   46693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:55:54.662165   46693 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:55:54.662224   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:55:54.669754   46693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0128 11:55:54.683200   46693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:55:54.697119   46693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0128 11:55:54.711028   46693 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:55:54.715716   46693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:55:54.726346   46693 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000 for IP: 192.168.67.2
	I0128 11:55:54.726380   46693 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:55:54.726565   46693 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:55:54.726630   46693 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:55:54.726725   46693 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/client.key
	I0128 11:55:54.726787   46693 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.key.c7fa3a9e
	I0128 11:55:54.726849   46693 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.key
	I0128 11:55:54.727064   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:55:54.727102   46693 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:55:54.727112   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:55:54.727147   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:55:54.727194   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:55:54.727231   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:55:54.727297   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:55:54.728885   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:55:54.747560   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:55:54.765959   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:55:54.783488   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:55:54.801078   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:55:54.818479   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:55:54.835743   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:55:54.852813   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:55:54.870492   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:55:54.888196   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:55:54.905571   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:55:54.922881   46693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:55:54.935818   46693 ssh_runner.go:195] Run: openssl version
	I0128 11:55:54.941423   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:55:54.949904   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.953965   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.954013   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.959398   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:55:54.967068   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:55:54.975283   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.979533   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.979586   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.985247   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:55:54.993115   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:55:55.001335   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.005342   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.005405   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.010795   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
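
	Each trusted PEM above is paired with a symlink named after its OpenSSL subject hash (b5213941.0, 3ec20f2e.0, 51391683.0 in this run), which is how OpenSSL's default lookup finds CA files under /etc/ssl/certs. The pattern for one cert:

		pem=/usr/share/ca-certificates/minikubeCA.pem    # any of the PEMs above
		hash=$(openssl x509 -hash -noout -in "$pem")     # prints e.g. "b5213941"
		sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
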
	I0128 11:55:55.018350   46693 kubeadm.go:401] StartCluster: {Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:55.018459   46693 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:55:55.042079   46693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:55:55.050139   46693 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:55:55.050150   46693 kubeadm.go:633] restartCluster start
	I0128 11:55:55.050198   46693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:55:55.057387   46693 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:55.057471   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:55.115913   46693 kubeconfig.go:135] verify returned: extract IP: "newest-cni-573000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:55:55.116092   46693 kubeconfig.go:146] "newest-cni-573000" context is missing from /Users/jenkins/minikube-integration/15565-24808/kubeconfig - will repair!
	I0128 11:55:55.116420   46693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:55:55.117795   46693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:55:55.125795   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:55.125854   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:55.134488   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:55.635539   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:55.635696   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:55.646652   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:56.136132   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:56.136287   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:56.147517   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:56.636094   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:56.636244   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:56.647064   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:57.135062   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:57.135302   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:57.146541   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:57.635479   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:57.635709   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:57.646754   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:58.135695   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:58.135944   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:58.146927   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:58.636660   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:58.636873   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:58.648345   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:59.135169   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:59.135413   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:59.146342   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:59.636641   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:59.636797   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:59.647979   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:00.135889   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:00.136097   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:00.147306   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:00.634871   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:00.634976   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:00.646279   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:01.134887   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:01.135111   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:01.146052   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:01.635199   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:01.635370   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:01.646660   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:02.134724   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:02.134833   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:02.145599   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:02.635168   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:02.635282   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:02.646336   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:03.135839   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:03.136054   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:03.147013   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:03.635042   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:03.635154   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:03.645958   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:04.136365   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:04.136611   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:04.147718   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:04.635087   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:04.635299   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:04.646304   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.135467   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:05.135581   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:05.146746   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.146755   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:05.146809   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:05.155384   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.155397   46693 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0128 11:56:05.155408   46693 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:56:05.155483   46693 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:56:05.181177   46693 docker.go:456] Stopping containers: [1f3193bbba80 67ed8f37b447 4d0967b9fef3 454d3e049680 fcceea13b38b be2b41fb4917 9de1e876c481 9bbf0acf0071 1fb4e16fb3fd f55164823614 de9db767baee 919b5af00c14 bff4a689b514 103ceff9eff9]
	I0128 11:56:05.181268   46693 ssh_runner.go:195] Run: docker stop 1f3193bbba80 67ed8f37b447 4d0967b9fef3 454d3e049680 fcceea13b38b be2b41fb4917 9de1e876c481 9bbf0acf0071 1fb4e16fb3fd f55164823614 de9db767baee 919b5af00c14 bff4a689b514 103ceff9eff9
	I0128 11:56:05.206095   46693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:56:05.217684   46693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:56:05.226072   46693 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 28 19:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 28 19:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 28 19:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:55 /etc/kubernetes/scheduler.conf
	
	I0128 11:56:05.226141   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:56:05.234091   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:56:05.241956   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:56:05.249879   46693 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.249950   46693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:56:05.257712   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:56:05.265212   46693 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.265266   46693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:56:05.272639   46693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:56:05.280336   46693 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:56:05.280349   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:05.334297   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.258654   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.389862   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.447070   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.549631   46693 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:56:06.549702   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.059515   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.559536   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.626794   46693 api_server.go:71] duration metric: took 1.077159259s to wait for apiserver process to appear ...
	I0128 11:56:07.626816   46693 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:56:07.626836   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:07.628000   46693 api_server.go:268] stopped: https://127.0.0.1:64052/healthz: Get "https://127.0.0.1:64052/healthz": EOF
	I0128 11:56:08.128464   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:10.584463   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:56:10.584481   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:56:10.630107   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:10.637948   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:10.637978   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:11.129553   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:11.136072   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:11.136087   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:11.628129   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:11.633705   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:11.633722   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:12.128078   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:12.132878   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 200:
	ok
	I0128 11:56:12.139560   46693 api_server.go:140] control plane version: v1.26.1
	I0128 11:56:12.139580   46693 api_server.go:130] duration metric: took 4.512746305s to wait for apiserver health ...
	I0128 11:56:12.139587   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:56:12.139605   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:56:12.179626   46693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:56:12.217774   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:56:12.229002   46693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 11:56:12.244621   46693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:56:12.252529   46693 system_pods.go:59] 8 kube-system pods found
	I0128 11:56:12.252545   46693 system_pods.go:61] "coredns-787d4945fb-f565f" [3bc748c7-f81d-4a48-bdf6-1f0c07c3f810] Running
	I0128 11:56:12.252551   46693 system_pods.go:61] "etcd-newest-cni-573000" [4b3ec313-3858-4d74-b3e0-056becc64aea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:56:12.252556   46693 system_pods.go:61] "kube-apiserver-newest-cni-573000" [3688b6cb-939e-4478-a899-27c65613b1a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:56:12.252561   46693 system_pods.go:61] "kube-controller-manager-newest-cni-573000" [f3a8093f-f907-4918-bcf6-54cc3a3f578b] Running
	I0128 11:56:12.252564   46693 system_pods.go:61] "kube-proxy-bc256" [e9a1c26a-838f-4673-abbe-4ad2f59eacad] Running
	I0128 11:56:12.252568   46693 system_pods.go:61] "kube-scheduler-newest-cni-573000" [c215b6ec-c498-474a-a2a3-1979d9aa6715] Running
	I0128 11:56:12.252572   46693 system_pods.go:61] "metrics-server-7997d45854-c7fg8" [8a0d4760-1341-4b38-b663-68c393ab3d60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:56:12.252575   46693 system_pods.go:61] "storage-provisioner" [87ae38f1-5c6b-43aa-aa1b-219b8c2d65f9] Running
	I0128 11:56:12.252579   46693 system_pods.go:74] duration metric: took 7.946058ms to wait for pod list to return data ...
	I0128 11:56:12.252584   46693 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:56:12.255770   46693 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:56:12.255787   46693 node_conditions.go:123] node cpu capacity is 6
	I0128 11:56:12.255797   46693 node_conditions.go:105] duration metric: took 3.208691ms to run NodePressure ...
	I0128 11:56:12.255808   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:12.615594   46693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 11:56:12.627388   46693 ops.go:34] apiserver oom_adj: -16
	I0128 11:56:12.627408   46693 kubeadm.go:637] restartCluster took 17.57720799s
	I0128 11:56:12.627418   46693 kubeadm.go:403] StartCluster complete in 17.609032348s
	I0128 11:56:12.627428   46693 settings.go:142] acquiring lock: {Name:mkb81e67ff3b64beaca5a3176f054172b211c785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:56:12.627508   46693 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:56:12.628078   46693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:56:12.628359   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 11:56:12.628406   46693 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 11:56:12.628535   46693 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-573000"
	I0128 11:56:12.628553   46693 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-573000"
	I0128 11:56:12.628554   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	W0128 11:56:12.628562   46693 addons.go:236] addon storage-provisioner should already be in state true
	I0128 11:56:12.628546   46693 addons.go:65] Setting default-storageclass=true in profile "newest-cni-573000"
	I0128 11:56:12.628570   46693 addons.go:65] Setting metrics-server=true in profile "newest-cni-573000"
	I0128 11:56:12.628611   46693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-573000"
	I0128 11:56:12.628619   46693 addons.go:227] Setting addon metrics-server=true in "newest-cni-573000"
	I0128 11:56:12.628605   46693 addons.go:65] Setting dashboard=true in profile "newest-cni-573000"
	W0128 11:56:12.628633   46693 addons.go:236] addon metrics-server should already be in state true
	I0128 11:56:12.628643   46693 addons.go:227] Setting addon dashboard=true in "newest-cni-573000"
	W0128 11:56:12.628653   46693 addons.go:236] addon dashboard should already be in state true
	I0128 11:56:12.628673   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.628685   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.628696   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.629066   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629173   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629227   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629921   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.636687   46693 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-573000" context rescaled to 1 replicas
	I0128 11:56:12.636731   46693 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:56:12.660066   46693 out.go:177] * Verifying Kubernetes components...
	I0128 11:56:12.700849   46693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:56:12.733909   46693 addons.go:227] Setting addon default-storageclass=true in "newest-cni-573000"
	I0128 11:56:12.787823   46693 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0128 11:56:12.746138   46693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 11:56:12.766964   46693 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0128 11:56:12.787842   46693 addons.go:236] addon default-storageclass should already be in state true
	I0128 11:56:12.809163   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.830097   46693 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:56:12.888144   46693 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0128 11:56:12.888192   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 11:56:12.926134   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0128 11:56:12.834939   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.850810   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0128 11:56:12.926194   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0128 11:56:12.851142   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.926215   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.834915   46693 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 11:56:12.926149   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0128 11:56:12.926291   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.926357   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:13.029903   46693 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:56:13.029904   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.030027   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:13.030442   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.031336   46693 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 11:56:13.031349   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 11:56:13.031440   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:13.034579   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.056544   46693 api_server.go:71] duration metric: took 419.777ms to wait for apiserver process to appear ...
	I0128 11:56:13.056570   46693 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:56:13.056588   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:13.062926   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 200:
	ok
	I0128 11:56:13.065543   46693 api_server.go:140] control plane version: v1.26.1
	I0128 11:56:13.065555   46693 api_server.go:130] duration metric: took 8.97953ms to wait for apiserver health ...
	I0128 11:56:13.065561   46693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:56:13.101221   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.119426   46693 system_pods.go:59] 8 kube-system pods found
	I0128 11:56:13.119452   46693 system_pods.go:61] "coredns-787d4945fb-f565f" [3bc748c7-f81d-4a48-bdf6-1f0c07c3f810] Running
	I0128 11:56:13.119462   46693 system_pods.go:61] "etcd-newest-cni-573000" [4b3ec313-3858-4d74-b3e0-056becc64aea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:56:13.119472   46693 system_pods.go:61] "kube-apiserver-newest-cni-573000" [3688b6cb-939e-4478-a899-27c65613b1a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:56:13.119485   46693 system_pods.go:61] "kube-controller-manager-newest-cni-573000" [f3a8093f-f907-4918-bcf6-54cc3a3f578b] Running
	I0128 11:56:13.119490   46693 system_pods.go:61] "kube-proxy-bc256" [e9a1c26a-838f-4673-abbe-4ad2f59eacad] Running
	I0128 11:56:13.119499   46693 system_pods.go:61] "kube-scheduler-newest-cni-573000" [c215b6ec-c498-474a-a2a3-1979d9aa6715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:56:13.119507   46693 system_pods.go:61] "metrics-server-7997d45854-c7fg8" [8a0d4760-1341-4b38-b663-68c393ab3d60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:56:13.119513   46693 system_pods.go:61] "storage-provisioner" [87ae38f1-5c6b-43aa-aa1b-219b8c2d65f9] Running
	I0128 11:56:13.119519   46693 system_pods.go:74] duration metric: took 53.953713ms to wait for pod list to return data ...
	I0128 11:56:13.119541   46693 default_sa.go:34] waiting for default service account to be created ...
	I0128 11:56:13.122196   46693 default_sa.go:45] found service account: "default"
	I0128 11:56:13.122211   46693 default_sa.go:55] duration metric: took 2.661751ms for default service account to be created ...
	I0128 11:56:13.122221   46693 kubeadm.go:578] duration metric: took 485.460927ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0128 11:56:13.122235   46693 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:56:13.126083   46693 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:56:13.126099   46693 node_conditions.go:123] node cpu capacity is 6
	I0128 11:56:13.126106   46693 node_conditions.go:105] duration metric: took 3.866754ms to run NodePressure ...
	I0128 11:56:13.126114   46693 start.go:228] waiting for startup goroutines ...
	I0128 11:56:13.228988   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0128 11:56:13.229003   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0128 11:56:13.231479   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:56:13.232511   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0128 11:56:13.232524   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0128 11:56:13.249867   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0128 11:56:13.249884   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0128 11:56:13.249930   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0128 11:56:13.249940   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0128 11:56:13.333018   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 11:56:13.334189   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:56:13.334201   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0128 11:56:13.337376   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0128 11:56:13.337395   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0128 11:56:13.420951   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0128 11:56:13.420969   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0128 11:56:13.425699   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:56:13.446061   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0128 11:56:13.446078   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0128 11:56:13.534682   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0128 11:56:13.534698   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0128 11:56:13.613813   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0128 11:56:13.613848   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0128 11:56:13.637617   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0128 11:56:13.637641   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0128 11:56:13.733422   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:56:13.733443   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0128 11:56:13.822378   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:56:14.620857   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287799464s)
	I0128 11:56:14.620868   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.38935244s)
	I0128 11:56:14.632478   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206742362s)
	I0128 11:56:14.632507   46693 addons.go:457] Verifying addon metrics-server=true in "newest-cni-573000"
	I0128 11:56:14.768900   46693 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-573000 addons enable metrics-server	
	
	
	I0128 11:56:14.789797   46693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0128 11:56:14.832912   46693 addons.go:492] enable addons completed in 2.204489746s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0128 11:56:14.832938   46693 start.go:233] waiting for cluster config update ...
	I0128 11:56:14.832955   46693 start.go:240] writing updated cluster config ...
	I0128 11:56:14.833328   46693 ssh_runner.go:195] Run: rm -f paused
	I0128 11:56:14.872692   46693 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0128 11:56:14.893961   46693 out.go:177] * Done! kubectl is now configured to use "newest-cni-573000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 19:56:19 UTC. --
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469071572Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469910601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.470150136Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.523375038Z" level=info msg="Starting up"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525046742Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525083922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525106137Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525113757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526268026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526308462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526321446Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526328970Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.533397728Z" level=info msg="Loading containers: start."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.611043216Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.644702151Z" level=info msg="Loading containers: done."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653097814Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653195626Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.675039977Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.681266325Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-01-28T19:56:22Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:56:22 up  2:55,  0 users,  load average: 1.05, 1.21, 1.26
	Linux old-k8s-version-182000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 19:56:22 UTC. --
	Jan 28 19:56:21 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:56:21 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Jan 28 19:56:21 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:56:21 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: I0128 19:56:22.000793   24986 server.go:410] Version: v1.16.0
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: I0128 19:56:22.001016   24986 plugins.go:100] No cloud provider specified.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: I0128 19:56:22.001028   24986 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: I0128 19:56:22.002937   24986 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: W0128 19:56:22.003638   24986 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: W0128 19:56:22.003711   24986 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[24986]: F0128 19:56:22.003740   24986 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 929.
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: I0128 19:56:22.745132   25026 server.go:410] Version: v1.16.0
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: I0128 19:56:22.745338   25026 plugins.go:100] No cloud provider specified.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: I0128 19:56:22.745348   25026 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: I0128 19:56:22.747015   25026 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: W0128 19:56:22.747750   25026 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: W0128 19:56:22.747823   25026 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 19:56:22 old-k8s-version-182000 kubelet[25026]: F0128 19:56:22.747846   25026 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 19:56:22 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:56:22.392204   46900 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
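
The api_server.go entries earlier in this log show the health gate minikube applies before configuring CNI: GET https://127.0.0.1:64052/healthz roughly every 500ms, printing each 500 body with its [+]/[-] poststarthook report, until the endpoint returns 200. The following Go sketch is illustrative only; the URL, the 500ms interval, and the timeout are assumptions read off the log, not minikube's actual implementation.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// A local apiserver serves a self-signed certificate, so this
			// probe skips verification (an assumption for the sketch).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // corresponds to "healthz returned 200: ok" above
				}
				// 500 responses carry the [+]/[-] poststarthook report seen above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // the log shows ~500ms between checks
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:64052/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}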

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (491.981031ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-182000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (575.70s)
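
helpers_test.go:254 above shells out to the minikube binary and inspects the exit code as well as the stdout: exit status 2 with output "Stopped" is tolerated ("may be ok") and only causes the kubectl steps to be skipped. A hedged reconstruction of that probe in Go follows; the command line is copied from the log, while the surrounding code is an assumption, not the test helper itself.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Command line copied from helpers_test.go:254 in the log above.
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.APIServer}}",
			"-p", "old-k8s-version-182000", "-n", "old-k8s-version-182000")
		out, err := cmd.CombinedOutput()
		fmt.Printf("status output: %s", out)
		if exitErr, ok := err.(*exec.ExitError); ok {
			// minikube encodes component state in the exit code; status 2
			// alongside "Stopped" is what the failing test observed.
			fmt.Printf("exit status %d (may be ok)\n", exitErr.ExitCode())
		}
	}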

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:56:39.815395   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:56:42.518237   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:57:03.475881   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:57:07.321614   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:57:09.283249   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:58:18.507095   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:58:30.385357   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:58:33.545981   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:58:37.856066   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:58:46.627276   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:59:02.935625   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:02.941745   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:02.951894   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:02.972154   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:03.013931   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:03.094180   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:03.255018   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:03.576343   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:04.217058   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:05.497948   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:59:08.058417   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 11:59:13.179682   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:59:23.420238   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 11:59:43.902606   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:00:01.610636   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:00:06.814372   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:00:24.933855   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
E0128 12:00:31.205357   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 12:00:33.041055   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:01:39.886129   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:01:42.589195   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 12:01:46.855910   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:01:54.255992   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 12:02:03.547214   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0128 12:02:07.391261   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 12:02:09.354788   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:62983/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E0128 12:03:09.934111   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 8 more times]
E0128 12:03:18.579953   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 14 more times]
E0128 12:03:33.617690   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 3 more times]
E0128 12:03:37.928462   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 8 more times]
E0128 12:03:46.698834   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 15 more times]
E0128 12:04:03.007675   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 27 more times]
E0128 12:04:30.699914   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/default-k8s-diff-port-404000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 14 more times]
E0128 12:04:45.635719   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 15 more times]
E0128 12:05:01.685361   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 4 more times]
E0128 12:05:06.890948   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 23 more times]
E0128 12:05:31.211646   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 1 more time]
E0128 12:05:33.048385   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
[last message repeated 1 more time]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (406.083277ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-182000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
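For readers decoding the warning spam above: "timed out waiting for the condition" is the stock error string from the k8s.io/apimachinery wait package, and the EOF / "client rate limiter Wait" messages are what client-go surfaces when the connection to the apiserver keeps dropping while a poll retries. A minimal sketch of this style of label-selector poll, assuming client-go and a reachable kubeconfig; this is illustrative only, not the test's actual helper (the namespace, selector, and 9m budget are taken from the log above, the 3s interval is assumed):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // The integration tests point at the profile's kubeconfig; this
        // sketch falls back to the default ~/.kube/config.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 3s for up to the 9m budget seen in the log above.
        err = wait.PollImmediate(3*time.Second, 9*time.Minute, func() (bool, error) {
            pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err != nil {
                // Like the helper above: log the EOF / rate-limiter error
                // and keep polling instead of failing outright.
                fmt.Println("WARNING: pod list returned:", err)
                return false, nil
            }
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodRunning {
                    return true, nil
                }
            }
            return false, nil
        })
        if err != nil {
            fmt.Println(err) // "timed out waiting for the condition"
        }
    }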
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-182000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-182000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (845ns)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-182000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
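One detail worth decoding above: the kubectl describe call "failed" after just 845ns because the test's context had already passed its deadline, so exec.CommandContext returns the context error before the child process ever starts. A tiny self-contained illustration, with the timeout shortened so it runs instantly (any binary on PATH works in place of kubectl):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Millisecond)
        defer cancel()
        <-ctx.Done() // stand-in for the 9m of polling that consumed the budget

        // With an already-expired context the command fails immediately,
        // which is how a kubectl call can "take" well under a microsecond.
        cmd := exec.CommandContext(ctx, "kubectl", "version", "--client")
        err := cmd.Run()
        fmt.Println(err)       // context deadline exceeded
        fmt.Println(ctx.Err()) // context.DeadlineExceeded
    }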
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-182000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-182000:

-- stdout --
	[
	    {
	        "Id": "617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac",
	        "Created": "2023-01-28T19:32:55.313551858Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 692432,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-01-28T19:38:32.66261959Z",
	            "FinishedAt": "2023-01-28T19:38:29.825307287Z"
	        },
	        "Image": "sha256:01c0ce65fff70ab1f019aa14679c46b23331bd108ae899438e589673efaa9c00",
	        "ResolvConfPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/hosts",
	        "LogPath": "/var/lib/docker/containers/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac/617ab90eb0df8d30a74a5dc57ab16039882655d65e3025362322b78aeec379ac-json.log",
	        "Name": "/old-k8s-version-182000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-182000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-182000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a-init/diff:/var/lib/docker/overlay2/ebc03c916d1215717cc928cc2ae6bb5febcaf1787682b19b31688cb58ea354df/diff:/var/lib/docker/overlay2/aaa47387c6297b9482eaf2d8291628b9713643f21d066c37435b7e2cb9493e2a/diff:/var/lib/docker/overlay2/f4b2c82f60338b3f859441322400906b78ab112321f53e01c52ec81f29b4b492/diff:/var/lib/docker/overlay2/9425b655d46ca09e43b6484556a0c42b69e0c7947e14ec530546a61f36d3b950/diff:/var/lib/docker/overlay2/7d54571f62200ad4404fb9bb52649136f53eb6d6eedc5a51b22898df9001c1d4/diff:/var/lib/docker/overlay2/a4b4864baac235070d93e0940d897dd3006e6a93d705490108451f8d00ba148f/diff:/var/lib/docker/overlay2/8b092a30ffaf1c9230cef4864afb85d91ceb9fa92e484e3ebf7a31bb7df915bc/diff:/var/lib/docker/overlay2/96ac23e2e494a92e2287115c1a85e160e67543832baaaa3fa9a2351b370d5bd4/diff:/var/lib/docker/overlay2/c1e68f2d6c4ce95b33833a8d750a79aeaef16cc7d0a556369a63014eef7597b6/diff:/var/lib/docker/overlay2/89b3fe
fdd4bd8243826ccca31dec1aef9f91ad82adda108147b89c096792dfa5/diff:/var/lib/docker/overlay2/0b09be009751a25e4cbe64835151f1a814c4547d2c513994ae82f8093a22040d/diff:/var/lib/docker/overlay2/dc9a2b1667d67c8f0269966ef8862a4ffcfe4b68ad45f12e3ff27075c595c716/diff:/var/lib/docker/overlay2/d41ab03c6154f92111515bffc37c1d75570fa697ffa380631216096b52bfbc1b/diff:/var/lib/docker/overlay2/549b2cfc0a7d4f81f8d2624b1b2069b66d159ecd7b38148b476bb7a1b9e29100/diff:/var/lib/docker/overlay2/ecd7a1e2ce66c77afcf87a94383f14763eca5c8732c76b1b83765a278db91228/diff:/var/lib/docker/overlay2/6361f06734d312adc4271443765c435c4a7600356d1c6597fb7fa440cf1a2eb4/diff:/var/lib/docker/overlay2/cc7751a853d09ad130dccc1c835daa64e6ba830331636aca6a2a98da95ab52c1/diff:/var/lib/docker/overlay2/6612588f68e64e123a6e5cf6f6da339ee6072f8054f936be6d4f799d6c683e75/diff:/var/lib/docker/overlay2/673e42d3b5998d60bbb5c7c40da29902c3ea35068701966a7e3fd8a923d4a37a/diff:/var/lib/docker/overlay2/115d8a9e167d9b574c1d945d85d46da3ad2688595502524702976fc9b1051464/diff:/var/lib/d
ocker/overlay2/a8a2380c37eec6348eac27c7ee660b1f1d1ef94786cd68f197218066d99d80dd/diff:/var/lib/docker/overlay2/9261c5669bb687df6f9ad1ac00615cdf03b913ab9b3e1ca1a1f1cb6420702325/diff:/var/lib/docker/overlay2/46213bfa914da7941cec1c2c32185400a83c35a74274f39d74ad203ee5688535/diff:/var/lib/docker/overlay2/45ce48252aa0eeb54f2a1c27e570f8e85ac4a1d28a947b81618e608c64e3a700/diff:/var/lib/docker/overlay2/5631fae0fb00254444e3cc059b8b6062ee02fd66eefdf043970883f6724ce682/diff:/var/lib/docker/overlay2/e23ece345ff4dee7248a8e8cbd15cdbaef319d286a6490377fc337feecd6be04/diff:/var/lib/docker/overlay2/004bedb9de21965ae003d62b64a9e6506a10afa328b9af469eb51d3920d9c3b6/diff:/var/lib/docker/overlay2/c0ed692b610507b4315c2a43e64bd682bfdae35a7b4bcba499bba9cfb33121c4/diff:/var/lib/docker/overlay2/8396057830d1ed01256a5ee803b6310c8bf4c6ef3fb0f958240557352a12f3db/diff:/var/lib/docker/overlay2/c8024a29733fe87d5aad124df5ff33e97bcca94ee9fee196a6d51c9474692733/diff:/var/lib/docker/overlay2/9e59b455e481cdabd17790daddef6872e7b6452d1e8de1526998d92ab5f
c008f/diff:/var/lib/docker/overlay2/88cc3ecb1b979acbac3227fd30f3e879629eff2b47f416b3069463900f3e40e0/diff:/var/lib/docker/overlay2/5ef1713ef4e296c4637ccd2823c2b80cb5c53cd757947ff3fc17b7dd2d2dd21c/diff:/var/lib/docker/overlay2/17a697eb9c335b2a20567e3615e2222a113542532402dc62978ff64d65860c5e/diff:/var/lib/docker/overlay2/69e01a154090c42cbf63b88c7e922d483dd2d393fbab64725f79b3ff3800c3c1/diff:/var/lib/docker/overlay2/6ed77ee7b45230567431b0cbfb9cefedfd3f3d7eecf271f20a711bbcc4fdb1b3/diff:/var/lib/docker/overlay2/3bf095c6d6fe582e91d9a9ab0dc5b4d168f93f28ec2488a88f60b63ebf1e22f7/diff:/var/lib/docker/overlay2/cfc3bbbdc2702c8d23d146885b4da1a4482e8af461b5c87426fab855f97417a0/diff:/var/lib/docker/overlay2/1c4944ff8930ced790954d78530aeaf94eeb6c7367b474bdfbad30345cc1276a/diff:/var/lib/docker/overlay2/44cf435555d16eb68c4149bc53e4ae11797c7ddb429332f3d0d36328cb16ea5f/diff:/var/lib/docker/overlay2/4a7b4287594c4da981df984cd6e3910778bfdff2b5560a03d6cdcb589790c8e5/diff:/var/lib/docker/overlay2/76c287aa1bd3a7c3636e82df1bac8ead485e55
7a0fd68fdbfc0d5655d89f7113/diff:/var/lib/docker/overlay2/a2ab65056651b30980d6df9664f682519df2c2fc604d87ddb2bb2ca25b663d5e/diff:/var/lib/docker/overlay2/3a84daa5ad43dd7c27d884672613e37b8a5bed1fa79edee0e951b2e3fa39f21f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf358ab04dbdc1e02cf2f4479b1d8fec29dc7884849cab2c09927d672777c18a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-182000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-182000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-182000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-182000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a866b115da76d1500e5c6ec1c87955e1bc3fb30a0609eeb66b3f8fe1f7fa2c1a",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62979"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62980"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62981"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62982"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62983"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a866b115da76",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-182000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "617ab90eb0df",
	                        "old-k8s-version-182000"
	                    ],
	                    "NetworkID": "56bfdf73bec9b0196848fd6c701661b6f09d89a5213236097da597daf246c910",
	                    "EndpointID": "ded8251749a3d30dcda48b4492f2a9fb69f5ae5dd7d576b06c81313cb7eb59b8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
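The docker inspect dump above is how the harness snapshots the failed node's state; the useful part is the NetworkSettings.Ports map, since minikube reaches the container's sshd and apiserver through those 127.0.0.1 host-port forwards. A minimal sketch of reading one mapping with the same Go template that appears in the logs below (the profile name old-k8s-version-182000 is taken from this run; the exact port value will differ between runs):

	# prints the forwarded SSH port -- 62979 in the dump above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-182000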
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (404.435876ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
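The "(may be ok)" note reflects that minikube status reports health through its exit code as well as stdout: the host container prints Running, while the non-zero exit signals that some other checked component is not healthy (the precise mapping of exit status 2 to a specific component is an assumption here). A sketch of separating the two signals:

	out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
	rc=$?   # 0 only when every checked component is healthy; stdout alone can still read "Running"
	[ "$rc" -ne 0 ] && echo "degraded: status exited $rc"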
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-182000 logs -n 25: (3.417753192s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	| Command |                         Args                         |           Profile            |  User   |         Version          |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	| pause   | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| delete  | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	| delete  | -p embed-certs-384000                                | embed-certs-384000           | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	| delete  | -p                                                   | disable-driver-mounts-244000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:48 PST |
	|         | disable-driver-mounts-244000                         |                              |         |                          |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:48 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --memory=2200                                        |                              |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |                          |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |                          |                     |                     |
	|         | --driver=docker                                      |                              |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |                          |                     |                     |
	| addons  | enable metrics-server -p                             | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |                          |                     |                     |
	| stop    | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=3                               |                              |         |                          |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-404000     | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:49 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |                          |                     |                     |
	| start   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:49 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --memory=2200                                        |                              |         |                          |                     |                     |
	|         | --alsologtostderr --wait=true                        |                              |         |                          |                     |                     |
	|         | --apiserver-port=8444                                |                              |         |                          |                     |                     |
	|         | --driver=docker                                      |                              |         |                          |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                              |         |                          |                     |                     |
	| ssh     | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | sudo crictl images -o json                           |                              |         |                          |                     |                     |
	| pause   | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	| delete  | -p                                                   | default-k8s-diff-port-404000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:54 PST |
	|         | default-k8s-diff-port-404000                         |                              |         |                          |                     |                     |
	| start   | -p newest-cni-573000 --memory=2200 --alsologtostderr | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:54 PST | 28 Jan 23 11:55 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |                          |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |                          |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |                          |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |                          |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |                          |                     |                     |
	| addons  | enable metrics-server -p newest-cni-573000           | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                              |         |                          |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                              |         |                          |                     |                     |
	| stop    | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --alsologtostderr -v=3                               |                              |         |                          |                     |                     |
	| addons  | enable dashboard -p newest-cni-573000                | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:55 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                              |         |                          |                     |                     |
	| start   | -p newest-cni-573000 --memory=2200 --alsologtostderr | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:55 PST | 28 Jan 23 11:56 PST |
	|         | --wait=apiserver,system_pods,default_sa              |                              |         |                          |                     |                     |
	|         | --feature-gates ServerSideApply=true                 |                              |         |                          |                     |                     |
	|         | --network-plugin=cni                                 |                              |         |                          |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 |                              |         |                          |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.26.1        |                              |         |                          |                     |                     |
	| ssh     | -p newest-cni-573000 sudo                            | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | crictl images -o json                                |                              |         |                          |                     |                     |
	| pause   | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| unpause | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|         | --alsologtostderr -v=1                               |                              |         |                          |                     |                     |
	| delete  | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	| delete  | -p newest-cni-573000                                 | newest-cni-573000            | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 11:56 PST | 28 Jan 23 11:56 PST |
	|---------|------------------------------------------------------|------------------------------|---------|--------------------------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 11:55:44
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 11:55:44.585656   46693 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:55:44.585825   46693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:55:44.585831   46693 out.go:309] Setting ErrFile to fd 2...
	I0128 11:55:44.585835   46693 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:55:44.585947   46693 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:55:44.586473   46693 out.go:303] Setting JSON to false
	I0128 11:55:44.604636   46693 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":10519,"bootTime":1674925225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 11:55:44.604714   46693 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 11:55:44.627086   46693 out.go:177] * [newest-cni-573000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 11:55:44.648789   46693 notify.go:220] Checking for updates...
	I0128 11:55:44.670698   46693 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 11:55:44.713558   46693 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:55:44.755584   46693 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 11:55:44.797390   46693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 11:55:44.839584   46693 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 11:55:44.860698   46693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 11:55:44.882367   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:55:44.883036   46693 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 11:55:44.944883   46693 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 11:55:44.945016   46693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:55:45.090681   46693 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:55:44.995945665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:55:45.134609   46693 out.go:177] * Using the docker driver based on existing profile
	I0128 11:55:45.155594   46693 start.go:296] selected driver: docker
	I0128 11:55:45.155626   46693 start.go:857] validating driver "docker" against &{Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:45.155795   46693 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 11:55:45.159644   46693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 11:55:45.300837   46693 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 19:55:45.209146798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 11:55:45.300994   46693 start_flags.go:936] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0128 11:55:45.301012   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:55:45.301024   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:55:45.301037   46693 start_flags.go:319] config:
	{Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:45.344644   46693 out.go:177] * Starting control plane node newest-cni-573000 in cluster newest-cni-573000
	I0128 11:55:45.366462   46693 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 11:55:45.387608   46693 out.go:177] * Pulling base image ...
	I0128 11:55:45.429673   46693 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:55:45.429721   46693 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 11:55:45.429773   46693 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 11:55:45.429796   46693 cache.go:57] Caching tarball of preloaded images
	I0128 11:55:45.429990   46693 preload.go:174] Found /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0128 11:55:45.430012   46693 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0128 11:55:45.431150   46693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/config.json ...
	I0128 11:55:45.486764   46693 image.go:81] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon, skipping pull
	I0128 11:55:45.486777   46693 cache.go:143] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in daemon, skipping load
	I0128 11:55:45.486795   46693 cache.go:193] Successfully downloaded all kic artifacts
	I0128 11:55:45.486833   46693 start.go:364] acquiring machines lock for newest-cni-573000: {Name:mk74b458fad51dc514dc72a8b30af124951b5ffc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0128 11:55:45.486916   46693 start.go:368] acquired machines lock for "newest-cni-573000" in 65.467µs
	I0128 11:55:45.486947   46693 start.go:96] Skipping create...Using existing machine configuration
	I0128 11:55:45.486957   46693 fix.go:55] fixHost starting: 
	I0128 11:55:45.487175   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:55:45.543684   46693 fix.go:103] recreateIfNeeded on newest-cni-573000: state=Stopped err=<nil>
	W0128 11:55:45.543714   46693 fix.go:129] unexpected machine state, will restart: <nil>
	I0128 11:55:45.565699   46693 out.go:177] * Restarting existing docker container for "newest-cni-573000" ...
	I0128 11:55:45.587436   46693 cli_runner.go:164] Run: docker start newest-cni-573000
	I0128 11:55:45.925304   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:55:45.988749   46693 kic.go:426] container "newest-cni-573000" state is running.
	I0128 11:55:45.989719   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:46.066529   46693 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/config.json ...
	I0128 11:55:46.066967   46693 machine.go:88] provisioning docker machine ...
	I0128 11:55:46.066991   46693 ubuntu.go:169] provisioning hostname "newest-cni-573000"
	I0128 11:55:46.067065   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.135082   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.135346   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.135361   46693 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-573000 && echo "newest-cni-573000" | sudo tee /etc/hostname
	I0128 11:55:46.277967   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-573000
	
	I0128 11:55:46.278066   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.339940   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.340093   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.340107   46693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-573000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-573000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-573000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0128 11:55:46.473851   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:55:46.473870   46693 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15565-24808/.minikube CaCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15565-24808/.minikube}
	I0128 11:55:46.473893   46693 ubuntu.go:177] setting up certificates
	I0128 11:55:46.473901   46693 provision.go:83] configureAuth start
	I0128 11:55:46.473976   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:46.533280   46693 provision.go:138] copyHostCerts
	I0128 11:55:46.533380   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem, removing ...
	I0128 11:55:46.533389   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem
	I0128 11:55:46.533497   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.pem (1082 bytes)
	I0128 11:55:46.533703   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem, removing ...
	I0128 11:55:46.533710   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem
	I0128 11:55:46.533776   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/cert.pem (1123 bytes)
	I0128 11:55:46.533943   46693 exec_runner.go:144] found /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem, removing ...
	I0128 11:55:46.533950   46693 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem
	I0128 11:55:46.534021   46693 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15565-24808/.minikube/key.pem (1675 bytes)
	I0128 11:55:46.534152   46693 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem org=jenkins.newest-cni-573000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-573000]
	I0128 11:55:46.572870   46693 provision.go:172] copyRemoteCerts
	I0128 11:55:46.572923   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0128 11:55:46.572977   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.630419   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:46.723336   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0128 11:55:46.741079   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0128 11:55:46.758556   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0128 11:55:46.776554   46693 provision.go:86] duration metric: configureAuth took 302.641005ms
	I0128 11:55:46.776578   46693 ubuntu.go:193] setting minikube options for container-runtime
	I0128 11:55:46.776752   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:55:46.776822   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:46.834008   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:46.834160   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:46.834169   46693 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0128 11:55:46.966925   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0128 11:55:46.966940   46693 ubuntu.go:71] root file system type: overlay
	I0128 11:55:46.967145   46693 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0128 11:55:46.967235   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.025909   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:47.026066   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:47.026119   46693 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0128 11:55:47.168199   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0128 11:55:47.168312   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.226664   46693 main.go:141] libmachine: Using SSH client type: native
	I0128 11:55:47.226817   46693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x13ec500] 0x13ef680 <nil>  [] 0s} 127.0.0.1 64053 <nil> <nil>}
	I0128 11:55:47.226831   46693 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0128 11:55:47.363667   46693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0128 11:55:47.363686   46693 machine.go:91] provisioned docker machine in 1.296707075s
	I0128 11:55:47.363695   46693 start.go:300] post-start starting for "newest-cni-573000" (driver="docker")
	I0128 11:55:47.363702   46693 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0128 11:55:47.363792   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0128 11:55:47.363845   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.421259   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.514928   46693 ssh_runner.go:195] Run: cat /etc/os-release
	I0128 11:55:47.518600   46693 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0128 11:55:47.518624   46693 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0128 11:55:47.518631   46693 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0128 11:55:47.518636   46693 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0128 11:55:47.518643   46693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/addons for local assets ...
	I0128 11:55:47.518742   46693 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15565-24808/.minikube/files for local assets ...
	I0128 11:55:47.518895   46693 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem -> 259822.pem in /etc/ssl/certs
	I0128 11:55:47.519080   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0128 11:55:47.526594   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:55:47.543871   46693 start.go:303] post-start completed in 180.16471ms
	I0128 11:55:47.543949   46693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:55:47.544030   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.601768   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.692171   46693 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0128 11:55:47.696892   46693 fix.go:57] fixHost completed within 2.209929524s
	I0128 11:55:47.696906   46693 start.go:83] releasing machines lock for "newest-cni-573000", held for 2.209977952s
	I0128 11:55:47.696996   46693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-573000
	I0128 11:55:47.753177   46693 ssh_runner.go:195] Run: cat /version.json
	I0128 11:55:47.753187   46693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0128 11:55:47.753252   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.753258   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:47.814636   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:55:47.814811   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	W0128 11:55:47.906879   46693 out.go:239] ! Image was not built for the current minikube version. To resolve this you can delete and recreate your minikube cluster using the latest images. Expected minikube version: v1.29.0 -> Actual minikube version: v1.29.0-1674856271-15565
	I0128 11:55:47.906959   46693 ssh_runner.go:195] Run: systemctl --version
	I0128 11:55:52.924679   46693 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (5.171460964s)
	W0128 11:55:52.924708   46693 start.go:833] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28
	stdout:
	
	stderr:
	curl: (28) Resolving timed out after 2000 milliseconds
	I0128 11:55:52.924752   46693 ssh_runner.go:235] Completed: systemctl --version: (5.017755945s)
	W0128 11:55:52.924803   46693 out.go:239] ! This container is having trouble accessing https://registry.k8s.io
	W0128 11:55:52.924810   46693 out.go:239] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0128 11:55:52.924834   46693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0128 11:55:52.929869   46693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0128 11:55:52.945747   46693 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0128 11:55:52.945865   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0128 11:55:52.953494   46693 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0128 11:55:52.966656   46693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0128 11:55:52.974615   46693 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0128 11:55:52.974633   46693 start.go:483] detecting cgroup driver to use...
	I0128 11:55:52.974645   46693 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:55:52.974730   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:55:52.988070   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0128 11:55:52.996976   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0128 11:55:53.005536   46693 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0128 11:55:53.005591   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0128 11:55:53.014011   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:55:53.022673   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0128 11:55:53.031004   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0128 11:55:53.039514   46693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0128 11:55:53.047444   46693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0128 11:55:53.055800   46693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0128 11:55:53.062918   46693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0128 11:55:53.069927   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:53.138075   46693 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0128 11:55:53.214367   46693 start.go:483] detecting cgroup driver to use...
	I0128 11:55:53.214397   46693 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0128 11:55:53.214476   46693 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0128 11:55:53.227860   46693 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0128 11:55:53.227935   46693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0128 11:55:53.239668   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0128 11:55:53.256390   46693 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0128 11:55:53.361505   46693 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0128 11:55:53.456602   46693 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0128 11:55:53.456617   46693 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0128 11:55:53.469820   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:53.554340   46693 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0128 11:55:53.795282   46693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:55:53.873250   46693 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0128 11:55:53.932290   46693 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0128 11:55:54.002176   46693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0128 11:55:54.077212   46693 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0128 11:55:54.089017   46693 start.go:530] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0128 11:55:54.089098   46693 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
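
start.go gives the cri-dockerd socket up to 60 seconds to appear and checks for it with stat (a single attempt succeeds here). The poll-with-deadline pattern behind that wait, sketched locally in Go (direct os.Stat instead of the stat-over-SSH the log shows):

	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls every 500ms until path exists or timeout elapses.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

A call like waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second) mirrors the wait announced above.
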
	I0128 11:55:54.093067   46693 start.go:551] Will wait 60s for crictl version
	I0128 11:55:54.093117   46693 ssh_runner.go:195] Run: which crictl
	I0128 11:55:54.096657   46693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0128 11:55:54.197428   46693 start.go:567] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.23
	RuntimeApiVersion:  v1alpha2
	I0128 11:55:54.197515   46693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:55:54.226602   46693 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0128 11:55:54.299415   46693 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...
	I0128 11:55:54.299635   46693 cli_runner.go:164] Run: docker exec -t newest-cni-573000 dig +short host.docker.internal
	I0128 11:55:54.416524   46693 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0128 11:55:54.416641   46693 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0128 11:55:54.421025   46693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0128 11:55:54.430979   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:54.510813   46693 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0128 11:55:54.532767   46693 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 11:55:54.532934   46693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:55:54.559249   46693 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:55:54.559277   46693 docker.go:560] Images already preloaded, skipping extraction
	I0128 11:55:54.559362   46693 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0128 11:55:54.583834   46693 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	registry.k8s.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0128 11:55:54.583858   46693 cache_images.go:84] Images are preloaded, skipping loading
	I0128 11:55:54.583951   46693 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0128 11:55:54.654053   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:55:54.654070   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:55:54.654091   46693 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0128 11:55:54.654115   46693 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-573000 NodeName:newest-cni-573000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0128 11:55:54.654248   46693 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-573000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0128 11:55:54.654341   46693 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-573000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0128 11:55:54.654419   46693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0128 11:55:54.662165   46693 binaries.go:44] Found k8s binaries, skipping transfer
	I0128 11:55:54.662224   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0128 11:55:54.669754   46693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0128 11:55:54.683200   46693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0128 11:55:54.697119   46693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0128 11:55:54.711028   46693 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0128 11:55:54.715716   46693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
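
Both /etc/hosts edits above (host.minikube.internal here, control-plane.minikube.internal just now) use the same bash idiom: filter out any existing line for the name, append a fresh tab-separated record, write the result to a temp file, and sudo cp it back (cp rather than mv, so the file's owner and permissions survive). The same operation expressed in Go, as a local-filesystem sketch only:

	package sketch

	import (
		"os"
		"strings"
	)

	// setHostRecord rewrites hostsPath so exactly one line maps name to ip.
	func setHostRecord(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// drop any prior record for this name (tab-separated, as in the log)
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}
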
	I0128 11:55:54.726346   46693 certs.go:56] Setting up /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000 for IP: 192.168.67.2
	I0128 11:55:54.726380   46693 certs.go:186] acquiring lock for shared ca certs: {Name:mk223e4eab41546e140aa3e3e480564c04fddab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:55:54.726565   46693 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key
	I0128 11:55:54.726630   46693 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key
	I0128 11:55:54.726725   46693 certs.go:311] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/client.key
	I0128 11:55:54.726787   46693 certs.go:311] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.key.c7fa3a9e
	I0128 11:55:54.726849   46693 certs.go:311] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.key
	I0128 11:55:54.727064   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem (1338 bytes)
	W0128 11:55:54.727102   46693 certs.go:397] ignoring /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982_empty.pem, impossibly tiny 0 bytes
	I0128 11:55:54.727112   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca-key.pem (1675 bytes)
	I0128 11:55:54.727147   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/ca.pem (1082 bytes)
	I0128 11:55:54.727194   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/cert.pem (1123 bytes)
	I0128 11:55:54.727231   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/certs/key.pem (1675 bytes)
	I0128 11:55:54.727297   46693 certs.go:401] found cert: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem (1708 bytes)
	I0128 11:55:54.728885   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0128 11:55:54.747560   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0128 11:55:54.765959   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0128 11:55:54.783488   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/newest-cni-573000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0128 11:55:54.801078   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0128 11:55:54.818479   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0128 11:55:54.835743   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0128 11:55:54.852813   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0128 11:55:54.870492   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/certs/25982.pem --> /usr/share/ca-certificates/25982.pem (1338 bytes)
	I0128 11:55:54.888196   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/ssl/certs/259822.pem --> /usr/share/ca-certificates/259822.pem (1708 bytes)
	I0128 11:55:54.905571   46693 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15565-24808/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0128 11:55:54.922881   46693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (772 bytes)
	I0128 11:55:54.935818   46693 ssh_runner.go:195] Run: openssl version
	I0128 11:55:54.941423   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/259822.pem && ln -fs /usr/share/ca-certificates/259822.pem /etc/ssl/certs/259822.pem"
	I0128 11:55:54.949904   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.953965   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Jan 28 18:44 /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.954013   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/259822.pem
	I0128 11:55:54.959398   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/259822.pem /etc/ssl/certs/3ec20f2e.0"
	I0128 11:55:54.967068   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0128 11:55:54.975283   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.979533   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Jan 28 18:40 /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.979586   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0128 11:55:54.985247   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0128 11:55:54.993115   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/25982.pem && ln -fs /usr/share/ca-certificates/25982.pem /etc/ssl/certs/25982.pem"
	I0128 11:55:55.001335   46693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.005342   46693 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Jan 28 18:44 /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.005405   46693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/25982.pem
	I0128 11:55:55.010795   46693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/25982.pem /etc/ssl/certs/51391683.0"
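
The link names being tested above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash names: OpenSSL locates a CA in /etc/ssl/certs by hashing the certificate's subject and looking up <hash>.0, which is exactly what the openssl x509 -hash -noout runs compute. A small sketch of deriving the link name (assumes an openssl binary on PATH):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// certLinkName returns the subject-hash symlink name for a PEM certificate,
	// e.g. "b5213941.0" for the minikubeCA.pem seen in the log above.
	func certLinkName(pemPath string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)) + ".0", nil
	}
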
	I0128 11:55:55.018350   46693 kubeadm.go:401] StartCluster: {Name:newest-cni-573000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-573000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 11:55:55.018459   46693 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:55:55.042079   46693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0128 11:55:55.050139   46693 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0128 11:55:55.050150   46693 kubeadm.go:633] restartCluster start
	I0128 11:55:55.050198   46693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0128 11:55:55.057387   46693 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:55.057471   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:55:55.115913   46693 kubeconfig.go:135] verify returned: extract IP: "newest-cni-573000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:55:55.116092   46693 kubeconfig.go:146] "newest-cni-573000" context is missing from /Users/jenkins/minikube-integration/15565-24808/kubeconfig - will repair!
	I0128 11:55:55.116420   46693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:55:55.117795   46693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0128 11:55:55.125795   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:55.125854   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:55.134488   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:55.635539   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:55.635696   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:55.646652   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:56.136132   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:56.136287   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:56.147517   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:56.636094   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:56.636244   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:56.647064   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:57.135062   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:57.135302   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:57.146541   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:57.635479   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:57.635709   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:57.646754   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:58.135695   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:58.135944   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:58.146927   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:58.636660   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:58.636873   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:58.648345   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:59.135169   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:59.135413   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:59.146342   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:55:59.636641   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:55:59.636797   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:55:59.647979   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:00.135889   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:00.136097   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:00.147306   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:00.634871   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:00.634976   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:00.646279   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:01.134887   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:01.135111   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:01.146052   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:01.635199   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:01.635370   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:01.646660   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:02.134724   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:02.134833   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:02.145599   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:02.635168   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:02.635282   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:02.646336   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:03.135839   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:03.136054   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:03.147013   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:03.635042   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:03.635154   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:03.645958   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:04.136365   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:04.136611   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:04.147718   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:04.635087   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:04.635299   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:04.646304   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.135467   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:05.135581   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:05.146746   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.146755   46693 api_server.go:165] Checking apiserver status ...
	I0128 11:56:05.146809   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0128 11:56:05.155384   46693 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.155397   46693 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
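
The twenty-odd "Checking apiserver status" entries above are one loop: roughly every 500ms, pgrep for a kube-apiserver process; every attempt exits with status 1, so the loop runs out its deadline and returns the "timed out waiting for the condition" error that forces the reconfigure path below. Its shape, sketched (run again stands in for an SSH executor like ssh_runner.Run):

	package sketch

	import (
		"errors"
		"time"
	)

	// waitForAPIServerProcess polls for a running kube-apiserver until timeout.
	func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits non-zero (status 1) while no process matches
			if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for the condition")
	}
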
	I0128 11:56:05.155408   46693 kubeadm.go:1120] stopping kube-system containers ...
	I0128 11:56:05.155483   46693 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0128 11:56:05.181177   46693 docker.go:456] Stopping containers: [1f3193bbba80 67ed8f37b447 4d0967b9fef3 454d3e049680 fcceea13b38b be2b41fb4917 9de1e876c481 9bbf0acf0071 1fb4e16fb3fd f55164823614 de9db767baee 919b5af00c14 bff4a689b514 103ceff9eff9]
	I0128 11:56:05.181268   46693 ssh_runner.go:195] Run: docker stop 1f3193bbba80 67ed8f37b447 4d0967b9fef3 454d3e049680 fcceea13b38b be2b41fb4917 9de1e876c481 9bbf0acf0071 1fb4e16fb3fd f55164823614 de9db767baee 919b5af00c14 bff4a689b514 103ceff9eff9
	I0128 11:56:05.206095   46693 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0128 11:56:05.217684   46693 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0128 11:56:05.226072   46693 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jan 28 19:55 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jan 28 19:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jan 28 19:55 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jan 28 19:55 /etc/kubernetes/scheduler.conf
	
	I0128 11:56:05.226141   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0128 11:56:05.234091   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0128 11:56:05.241956   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0128 11:56:05.249879   46693 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.249950   46693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0128 11:56:05.257712   46693 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0128 11:56:05.265212   46693 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:56:05.265266   46693 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0128 11:56:05.272639   46693 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0128 11:56:05.280336   46693 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0128 11:56:05.280349   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:05.334297   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.258654   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.389862   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.447070   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:06.549631   46693 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:56:06.549702   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.059515   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.559536   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:07.626794   46693 api_server.go:71] duration metric: took 1.077159259s to wait for apiserver process to appear ...
	I0128 11:56:07.626816   46693 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:56:07.626836   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:07.628000   46693 api_server.go:268] stopped: https://127.0.0.1:64052/healthz: Get "https://127.0.0.1:64052/healthz": EOF
	I0128 11:56:08.128464   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:10.584463   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0128 11:56:10.584481   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0128 11:56:10.630107   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:10.637948   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:10.637978   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:11.129553   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:11.136072   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:11.136087   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:11.628129   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:11.633705   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0128 11:56:11.633722   46693 api_server.go:102] status: https://127.0.0.1:64052/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0128 11:56:12.128078   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:12.132878   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 200:
	ok
	I0128 11:56:12.139560   46693 api_server.go:140] control plane version: v1.26.1
	I0128 11:56:12.139580   46693 api_server.go:130] duration metric: took 4.512746305s to wait for apiserver health ...
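
The healthz wait above tolerates two failure shapes on the way to 200: first a 403, because anonymous access to /healthz is refused until RBAC bootstrap completes, then 500s while the listed poststarthooks finish; only a 200 "ok" ends the loop. A probe of that kind might look like the following sketch (InsecureSkipVerify because the forwarded port presents minikube's self-signed serving certificate):

	package sketch

	import (
		"crypto/tls"
		"errors"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// the apiserver's serving cert is self-signed; skip verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					return nil // healthz returned 200: ok
				}
				// 403 (RBAC pending) and 500 (hooks pending) both mean retry
			}
			time.Sleep(500 * time.Millisecond)
		}
		return errors.New("timed out waiting for apiserver healthz")
	}

Against this log, the call would be waitForHealthz("https://127.0.0.1:64052/healthz", time.Minute).
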
	I0128 11:56:12.139587   46693 cni.go:84] Creating CNI manager for ""
	I0128 11:56:12.139605   46693 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 11:56:12.179626   46693 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0128 11:56:12.217774   46693 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0128 11:56:12.229002   46693 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0128 11:56:12.244621   46693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:56:12.252529   46693 system_pods.go:59] 8 kube-system pods found
	I0128 11:56:12.252545   46693 system_pods.go:61] "coredns-787d4945fb-f565f" [3bc748c7-f81d-4a48-bdf6-1f0c07c3f810] Running
	I0128 11:56:12.252551   46693 system_pods.go:61] "etcd-newest-cni-573000" [4b3ec313-3858-4d74-b3e0-056becc64aea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:56:12.252556   46693 system_pods.go:61] "kube-apiserver-newest-cni-573000" [3688b6cb-939e-4478-a899-27c65613b1a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:56:12.252561   46693 system_pods.go:61] "kube-controller-manager-newest-cni-573000" [f3a8093f-f907-4918-bcf6-54cc3a3f578b] Running
	I0128 11:56:12.252564   46693 system_pods.go:61] "kube-proxy-bc256" [e9a1c26a-838f-4673-abbe-4ad2f59eacad] Running
	I0128 11:56:12.252568   46693 system_pods.go:61] "kube-scheduler-newest-cni-573000" [c215b6ec-c498-474a-a2a3-1979d9aa6715] Running
	I0128 11:56:12.252572   46693 system_pods.go:61] "metrics-server-7997d45854-c7fg8" [8a0d4760-1341-4b38-b663-68c393ab3d60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:56:12.252575   46693 system_pods.go:61] "storage-provisioner" [87ae38f1-5c6b-43aa-aa1b-219b8c2d65f9] Running
	I0128 11:56:12.252579   46693 system_pods.go:74] duration metric: took 7.946058ms to wait for pod list to return data ...
	I0128 11:56:12.252584   46693 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:56:12.255770   46693 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:56:12.255787   46693 node_conditions.go:123] node cpu capacity is 6
	I0128 11:56:12.255797   46693 node_conditions.go:105] duration metric: took 3.208691ms to run NodePressure ...
	I0128 11:56:12.255808   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0128 11:56:12.615594   46693 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0128 11:56:12.627388   46693 ops.go:34] apiserver oom_adj: -16
	I0128 11:56:12.627408   46693 kubeadm.go:637] restartCluster took 17.57720799s
	I0128 11:56:12.627418   46693 kubeadm.go:403] StartCluster complete in 17.609032348s
	I0128 11:56:12.627428   46693 settings.go:142] acquiring lock: {Name:mkb81e67ff3b64beaca5a3176f054172b211c785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:56:12.627508   46693 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 11:56:12.628078   46693 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/kubeconfig: {Name:mkd8086baee7daec2b28ba7939ebfa1d8419f5f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 11:56:12.628359   46693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0128 11:56:12.628406   46693 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0128 11:56:12.628535   46693 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-573000"
	I0128 11:56:12.628553   46693 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-573000"
	I0128 11:56:12.628554   46693 config.go:180] Loaded profile config "newest-cni-573000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	W0128 11:56:12.628562   46693 addons.go:236] addon storage-provisioner should already be in state true
	I0128 11:56:12.628546   46693 addons.go:65] Setting default-storageclass=true in profile "newest-cni-573000"
	I0128 11:56:12.628570   46693 addons.go:65] Setting metrics-server=true in profile "newest-cni-573000"
	I0128 11:56:12.628611   46693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-573000"
	I0128 11:56:12.628619   46693 addons.go:227] Setting addon metrics-server=true in "newest-cni-573000"
	I0128 11:56:12.628605   46693 addons.go:65] Setting dashboard=true in profile "newest-cni-573000"
	W0128 11:56:12.628633   46693 addons.go:236] addon metrics-server should already be in state true
	I0128 11:56:12.628643   46693 addons.go:227] Setting addon dashboard=true in "newest-cni-573000"
	W0128 11:56:12.628653   46693 addons.go:236] addon dashboard should already be in state true
	I0128 11:56:12.628673   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.628685   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.628696   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.629066   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629173   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629227   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.629921   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.636687   46693 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-573000" context rescaled to 1 replicas
	I0128 11:56:12.636731   46693 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0128 11:56:12.660066   46693 out.go:177] * Verifying Kubernetes components...
	I0128 11:56:12.700849   46693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:56:12.733909   46693 addons.go:227] Setting addon default-storageclass=true in "newest-cni-573000"
	I0128 11:56:12.787823   46693 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0128 11:56:12.746138   46693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0128 11:56:12.766964   46693 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	W0128 11:56:12.787842   46693 addons.go:236] addon default-storageclass should already be in state true
	I0128 11:56:12.809163   46693 host.go:66] Checking if "newest-cni-573000" exists ...
	I0128 11:56:12.830097   46693 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:56:12.888144   46693 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0128 11:56:12.888192   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0128 11:56:12.926134   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0128 11:56:12.834939   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.850810   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0128 11:56:12.926194   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0128 11:56:12.851142   46693 cli_runner.go:164] Run: docker container inspect newest-cni-573000 --format={{.State.Status}}
	I0128 11:56:12.926215   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.834915   46693 start.go:892] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0128 11:56:12.926149   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0128 11:56:12.926291   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:12.926357   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:13.029903   46693 api_server.go:51] waiting for apiserver process to appear ...
	I0128 11:56:13.029904   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.030027   46693 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:56:13.030442   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.031336   46693 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0128 11:56:13.031349   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0128 11:56:13.031440   46693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-573000
	I0128 11:56:13.034579   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.056544   46693 api_server.go:71] duration metric: took 419.777ms to wait for apiserver process to appear ...
	I0128 11:56:13.056570   46693 api_server.go:87] waiting for apiserver healthz status ...
	I0128 11:56:13.056588   46693 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64052/healthz ...
	I0128 11:56:13.062926   46693 api_server.go:278] https://127.0.0.1:64052/healthz returned 200:
	ok
	I0128 11:56:13.065543   46693 api_server.go:140] control plane version: v1.26.1
	I0128 11:56:13.065555   46693 api_server.go:130] duration metric: took 8.97953ms to wait for apiserver health ...
	I0128 11:56:13.065561   46693 system_pods.go:43] waiting for kube-system pods to appear ...
	I0128 11:56:13.101221   46693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64053 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/newest-cni-573000/id_rsa Username:docker}
	I0128 11:56:13.119426   46693 system_pods.go:59] 8 kube-system pods found
	I0128 11:56:13.119452   46693 system_pods.go:61] "coredns-787d4945fb-f565f" [3bc748c7-f81d-4a48-bdf6-1f0c07c3f810] Running
	I0128 11:56:13.119462   46693 system_pods.go:61] "etcd-newest-cni-573000" [4b3ec313-3858-4d74-b3e0-056becc64aea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0128 11:56:13.119472   46693 system_pods.go:61] "kube-apiserver-newest-cni-573000" [3688b6cb-939e-4478-a899-27c65613b1a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0128 11:56:13.119485   46693 system_pods.go:61] "kube-controller-manager-newest-cni-573000" [f3a8093f-f907-4918-bcf6-54cc3a3f578b] Running
	I0128 11:56:13.119490   46693 system_pods.go:61] "kube-proxy-bc256" [e9a1c26a-838f-4673-abbe-4ad2f59eacad] Running
	I0128 11:56:13.119499   46693 system_pods.go:61] "kube-scheduler-newest-cni-573000" [c215b6ec-c498-474a-a2a3-1979d9aa6715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0128 11:56:13.119507   46693 system_pods.go:61] "metrics-server-7997d45854-c7fg8" [8a0d4760-1341-4b38-b663-68c393ab3d60] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0128 11:56:13.119513   46693 system_pods.go:61] "storage-provisioner" [87ae38f1-5c6b-43aa-aa1b-219b8c2d65f9] Running
	I0128 11:56:13.119519   46693 system_pods.go:74] duration metric: took 53.953713ms to wait for pod list to return data ...
	I0128 11:56:13.119541   46693 default_sa.go:34] waiting for default service account to be created ...
	I0128 11:56:13.122196   46693 default_sa.go:45] found service account: "default"
	I0128 11:56:13.122211   46693 default_sa.go:55] duration metric: took 2.661751ms for default service account to be created ...
	I0128 11:56:13.122221   46693 kubeadm.go:578] duration metric: took 485.460927ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0128 11:56:13.122235   46693 node_conditions.go:102] verifying NodePressure condition ...
	I0128 11:56:13.126083   46693 node_conditions.go:122] node storage ephemeral capacity is 107016164Ki
	I0128 11:56:13.126099   46693 node_conditions.go:123] node cpu capacity is 6
	I0128 11:56:13.126106   46693 node_conditions.go:105] duration metric: took 3.866754ms to run NodePressure ...
	I0128 11:56:13.126114   46693 start.go:228] waiting for startup goroutines ...
	I0128 11:56:13.228988   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0128 11:56:13.229003   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I0128 11:56:13.231479   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0128 11:56:13.232511   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0128 11:56:13.232524   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0128 11:56:13.249867   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0128 11:56:13.249884   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0128 11:56:13.249930   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0128 11:56:13.249940   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0128 11:56:13.333018   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0128 11:56:13.334189   46693 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:56:13.334201   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0128 11:56:13.337376   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0128 11:56:13.337395   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0128 11:56:13.420951   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0128 11:56:13.420969   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0128 11:56:13.425699   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0128 11:56:13.446061   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0128 11:56:13.446078   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0128 11:56:13.534682   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0128 11:56:13.534698   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0128 11:56:13.613813   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0128 11:56:13.613848   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0128 11:56:13.637617   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0128 11:56:13.637641   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0128 11:56:13.733422   46693 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:56:13.733443   46693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0128 11:56:13.822378   46693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0128 11:56:14.620857   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.287799464s)
	I0128 11:56:14.620868   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.38935244s)
	I0128 11:56:14.632478   46693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.206742362s)
	I0128 11:56:14.632507   46693 addons.go:457] Verifying addon metrics-server=true in "newest-cni-573000"
	I0128 11:56:14.768900   46693 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-573000 addons enable metrics-server	
	
	
	I0128 11:56:14.789797   46693 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0128 11:56:14.832912   46693 addons.go:492] enable addons completed in 2.204489746s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0128 11:56:14.832938   46693 start.go:233] waiting for cluster config update ...
	I0128 11:56:14.832955   46693 start.go:240] writing updated cluster config ...
	I0128 11:56:14.833328   46693 ssh_runner.go:195] Run: rm -f paused
	I0128 11:56:14.872692   46693 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0128 11:56:14.893961   46693 out.go:177] * Done! kubectl is now configured to use "newest-cni-573000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 20:05:35 UTC. --
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopping Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469071572Z" level=info msg="Processing signal 'terminated'"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.469910601Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[436]: time="2023-01-28T19:38:35.470150136Z" level=info msg="Daemon shutdown complete"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: docker.service: Succeeded.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Stopped Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Starting Docker Application Container Engine...
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.523375038Z" level=info msg="Starting up"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525046742Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525083922Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525106137Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.525113757Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526268026Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526308462Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526321446Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.526328970Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.533397728Z" level=info msg="Loading containers: start."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.611043216Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.644702151Z" level=info msg="Loading containers: done."
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653097814Z" level=info msg="Docker daemon" commit=6051f14 graphdriver(s)=overlay2 version=20.10.23
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.653195626Z" level=info msg="Daemon has completed initialization"
	Jan 28 19:38:35 old-k8s-version-182000 systemd[1]: Started Docker Application Container Engine.
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.675039977Z" level=info msg="API listen on [::]:2376"
	Jan 28 19:38:35 old-k8s-version-182000 dockerd[621]: time="2023-01-28T19:38:35.681266325Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2023-01-28T20:05:37Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  20:05:37 up  3:04,  0 users,  load average: 0.00, 0.19, 0.69
	Linux old-k8s-version-182000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Sat 2023-01-28 19:38:32 UTC, end at Sat 2023-01-28 20:05:37 UTC. --
	Jan 28 20:05:35 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: I0128 20:05:36.319516   34775 server.go:410] Version: v1.16.0
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: I0128 20:05:36.319974   34775 plugins.go:100] No cloud provider specified.
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: I0128 20:05:36.319986   34775 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: I0128 20:05:36.321722   34775 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: W0128 20:05:36.322413   34775 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: W0128 20:05:36.322484   34775 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 20:05:36 old-k8s-version-182000 kubelet[34775]: F0128 20:05:36.322508   34775 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jan 28 20:05:36 old-k8s-version-182000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: I0128 20:05:37.072569   34791 server.go:410] Version: v1.16.0
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: I0128 20:05:37.072818   34791 plugins.go:100] No cloud provider specified.
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: I0128 20:05:37.072833   34791 server.go:773] Client rotation is on, will bootstrap in background
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: I0128 20:05:37.074507   34791 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: W0128 20:05:37.075204   34791 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: W0128 20:05:37.075273   34791 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jan 28 20:05:37 old-k8s-version-182000 kubelet[34791]: F0128 20:05:37.075302   34791 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jan 28 20:05:37 old-k8s-version-182000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jan 28 20:05:37 old-k8s-version-182000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 12:05:37.457927   47578 logs.go:193] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
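The kubelet crash loop captured above ends every restart with `failed to run Kubelet: mountpoint for cpu not found`, i.e. the kubelet cannot find a mounted cpu cgroup (v1) controller inside the container. A minimal sketch of such a probe, scanning /proc/mounts on a Linux host with cgroup v1; this mirrors the failing check conceptually and is not kubelet's actual code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasCPUCgroup reports whether /proc/mounts lists a cgroup (v1) mount
// whose options include the "cpu" controller.
func hasCPUCgroup() (bool, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// /proc/mounts fields: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) < 4 || fields[2] != "cgroup" {
			continue
		}
		for _, opt := range strings.Split(fields[3], ",") {
			if opt == "cpu" {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasCPUCgroup()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	if !ok {
		fmt.Println("mountpoint for cpu not found") // the same condition kubelet v1.16 fails on
	}
}

On a cgroup v2 host the fstype is `cgroup2` and controllers are not mounted individually, which is one common way this v1.16-era check ends up failing.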
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 2 (400.473071ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-182000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.76s)
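For reference, `--format={{.APIServer}}` is a Go text/template evaluated against the profile's status, which is why the command above prints only `Stopped`. A minimal sketch of the mechanism, with illustrative field names rather than minikube's actual status struct:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the value a --format template selects fields from;
// the field names here are assumptions for illustration.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Stopped", matching the -- stdout -- block above.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}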

                                                
                                    

Test pass (271/306)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 12.77
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.26.1/json-events 7.64
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.3
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.4
18 TestDownloadOnlyKic 12.34
19 TestBinaryMirror 1.76
20 TestOffline 53.59
22 TestAddons/Setup 143.68
26 TestAddons/parallel/MetricsServer 5.84
27 TestAddons/parallel/HelmTiller 17.16
29 TestAddons/parallel/CSI 42.59
30 TestAddons/parallel/Headlamp 12.59
31 TestAddons/parallel/CloudSpanner 5.45
34 TestAddons/serial/GCPAuth/Namespaces 2.44
35 TestAddons/StoppedEnableDisable 11.53
36 TestCertOptions 44.82
37 TestCertExpiration 249.59
38 TestDockerFlags 34.88
39 TestForceSystemdFlag 37.93
40 TestForceSystemdEnv 37.27
42 TestHyperKitDriverInstallOrUpdate 8.87
46 TestErrorSpam/start 2.48
47 TestErrorSpam/status 1.27
48 TestErrorSpam/pause 1.8
49 TestErrorSpam/unpause 1.9
50 TestErrorSpam/stop 2.83
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 46.24
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 40.96
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.09
61 TestFunctional/serial/CacheCmd/cache/add_remote 6.99
62 TestFunctional/serial/CacheCmd/cache/add_local 1.68
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
64 TestFunctional/serial/CacheCmd/cache/list 0.08
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.81
67 TestFunctional/serial/CacheCmd/cache/delete 0.17
68 TestFunctional/serial/MinikubeKubectlCmd 0.55
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.69
70 TestFunctional/serial/ExtraConfig 43.86
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 3.13
73 TestFunctional/serial/LogsFileCmd 3.18
75 TestFunctional/parallel/ConfigCmd 0.5
76 TestFunctional/parallel/DashboardCmd 8.83
77 TestFunctional/parallel/DryRun 1.7
78 TestFunctional/parallel/InternationalLanguage 0.74
79 TestFunctional/parallel/StatusCmd 1.76
82 TestFunctional/parallel/ServiceCmd 14.26
84 TestFunctional/parallel/AddonsCmd 0.29
85 TestFunctional/parallel/PersistentVolumeClaim 28.95
87 TestFunctional/parallel/SSHCmd 1.14
88 TestFunctional/parallel/CpCmd 1.64
89 TestFunctional/parallel/MySQL 25.05
90 TestFunctional/parallel/FileSync 0.45
91 TestFunctional/parallel/CertSync 2.64
95 TestFunctional/parallel/NodeLabels 0.05
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.43
100 TestFunctional/parallel/DockerEnv/bash 1.7
101 TestFunctional/parallel/UpdateContextCmd/no_changes 0.33
102 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.49
103 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.32
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 18.3
108 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
109 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
113 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
114 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
115 TestFunctional/parallel/ProfileCmd/profile_list 0.51
116 TestFunctional/parallel/ProfileCmd/profile_json_output 0.58
117 TestFunctional/parallel/MountCmd/any-port 10.32
118 TestFunctional/parallel/MountCmd/specific-port 2.75
119 TestFunctional/parallel/Version/short 0.13
120 TestFunctional/parallel/Version/components 0.72
121 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
122 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
123 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
124 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
125 TestFunctional/parallel/ImageCommands/ImageBuild 3.3
126 TestFunctional/parallel/ImageCommands/Setup 6.74
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.32
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.42
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.21
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.27
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.74
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.69
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.52
134 TestFunctional/delete_addon-resizer_images 0.15
135 TestFunctional/delete_my-image_image 0.06
136 TestFunctional/delete_minikube_cached_images 0.06
140 TestImageBuild/serial/NormalBuild 2.19
141 TestImageBuild/serial/BuildWithBuildArg 0.92
142 TestImageBuild/serial/BuildWithDockerIgnore 0.48
143 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.42
153 TestJSONOutput/start/Command 47.97
154 TestJSONOutput/start/Audit 0
156 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
157 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
159 TestJSONOutput/pause/Command 1.05
160 TestJSONOutput/pause/Audit 0
162 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/unpause/Command 0.61
166 TestJSONOutput/unpause/Audit 0
168 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/stop/Command 5.85
172 TestJSONOutput/stop/Audit 0
174 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
176 TestErrorJSONOutput 0.76
178 TestKicCustomNetwork/create_custom_network 35.33
179 TestKicCustomNetwork/use_default_bridge_network 42.38
180 TestKicExistingNetwork 31.99
181 TestKicCustomSubnet 42.24
182 TestKicStaticIP 35.38
183 TestMainNoArgs 0.08
184 TestMinikubeProfile 74.13
187 TestMountStart/serial/StartWithMountFirst 8.23
188 TestMountStart/serial/VerifyMountFirst 0.43
189 TestMountStart/serial/StartWithMountSecond 8.02
190 TestMountStart/serial/VerifyMountSecond 0.41
191 TestMountStart/serial/DeleteFirst 2.14
192 TestMountStart/serial/VerifyMountPostDelete 0.4
193 TestMountStart/serial/Stop 1.59
194 TestMountStart/serial/RestartStopped 6.16
195 TestMountStart/serial/VerifyMountPostStop 0.4
198 TestMultiNode/serial/FreshStart2Nodes 77.23
199 TestMultiNode/serial/DeployApp2Nodes 9.69
200 TestMultiNode/serial/PingHostFrom2Pods 0.91
201 TestMultiNode/serial/AddNode 23.65
202 TestMultiNode/serial/ProfileList 0.48
203 TestMultiNode/serial/CopyFile 14.97
204 TestMultiNode/serial/StopNode 3.04
205 TestMultiNode/serial/StartAfterStop 10.63
206 TestMultiNode/serial/RestartKeepsNodes 87.12
207 TestMultiNode/serial/DeleteNode 6.18
208 TestMultiNode/serial/StopMultiNode 21.95
209 TestMultiNode/serial/RestartMultiNode 69.29
210 TestMultiNode/serial/ValidateNameConflict 34.29
214 TestPreload 135.51
216 TestScheduledStopUnix 109.28
217 TestSkaffold 64.49
219 TestInsufficientStorage 14.78
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 8.68
237 TestStoppedBinaryUpgrade/Setup 0.96
239 TestStoppedBinaryUpgrade/MinikubeLogs 3.59
241 TestPause/serial/Start 45.45
242 TestPause/serial/SecondStartNoReconfiguration 50.91
243 TestPause/serial/Pause 0.68
244 TestPause/serial/VerifyStatus 0.42
245 TestPause/serial/Unpause 0.64
246 TestPause/serial/PauseAgain 0.8
247 TestPause/serial/DeletePaused 2.65
248 TestPause/serial/VerifyDeletedResources 0.57
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.4
258 TestNoKubernetes/serial/StartWithK8s 35.4
259 TestNoKubernetes/serial/StartWithStopK8s 18.65
260 TestNoKubernetes/serial/Start 7.18
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.39
262 TestNoKubernetes/serial/ProfileList 32.9
263 TestNoKubernetes/serial/Stop 1.6
264 TestNoKubernetes/serial/StartNoArgs 4.91
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.39
266 TestNetworkPlugins/group/auto/Start 57.21
267 TestNetworkPlugins/group/auto/KubeletFlags 0.43
268 TestNetworkPlugins/group/auto/NetCatPod 14.2
269 TestNetworkPlugins/group/auto/DNS 0.13
270 TestNetworkPlugins/group/auto/Localhost 0.11
271 TestNetworkPlugins/group/auto/HairPin 0.12
272 TestNetworkPlugins/group/custom-flannel/Start 57.28
273 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.44
274 TestNetworkPlugins/group/custom-flannel/NetCatPod 19.23
275 TestNetworkPlugins/group/custom-flannel/DNS 0.13
276 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
277 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
278 TestNetworkPlugins/group/false/Start 49.29
279 TestNetworkPlugins/group/kindnet/Start 52.52
280 TestNetworkPlugins/group/false/KubeletFlags 0.45
281 TestNetworkPlugins/group/false/NetCatPod 24.24
282 TestNetworkPlugins/group/false/DNS 0.14
283 TestNetworkPlugins/group/false/Localhost 0.11
284 TestNetworkPlugins/group/false/HairPin 0.11
285 TestNetworkPlugins/group/kindnet/ControllerPod 5.01
286 TestNetworkPlugins/group/kindnet/KubeletFlags 0.44
287 TestNetworkPlugins/group/kindnet/NetCatPod 19.28
288 TestNetworkPlugins/group/flannel/Start 54.45
289 TestNetworkPlugins/group/kindnet/DNS 0.16
290 TestNetworkPlugins/group/kindnet/Localhost 0.16
291 TestNetworkPlugins/group/kindnet/HairPin 0.15
292 TestNetworkPlugins/group/enable-default-cni/Start 55.29
293 TestNetworkPlugins/group/flannel/ControllerPod 5.02
294 TestNetworkPlugins/group/flannel/KubeletFlags 0.49
295 TestNetworkPlugins/group/flannel/NetCatPod 14.22
296 TestNetworkPlugins/group/flannel/DNS 0.13
297 TestNetworkPlugins/group/flannel/Localhost 0.12
298 TestNetworkPlugins/group/flannel/HairPin 0.11
299 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
300 TestNetworkPlugins/group/enable-default-cni/NetCatPod 19.25
301 TestNetworkPlugins/group/bridge/Start 52.43
302 TestNetworkPlugins/group/enable-default-cni/DNS 0.16
303 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
304 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
305 TestNetworkPlugins/group/kubenet/Start 52.09
306 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
307 TestNetworkPlugins/group/bridge/NetCatPod 14.19
308 TestNetworkPlugins/group/bridge/DNS 0.13
309 TestNetworkPlugins/group/bridge/Localhost 0.11
310 TestNetworkPlugins/group/bridge/HairPin 0.11
311 TestNetworkPlugins/group/kubenet/KubeletFlags 0.43
312 TestNetworkPlugins/group/kubenet/NetCatPod 13.21
313 TestNetworkPlugins/group/calico/Start 79.31
314 TestNetworkPlugins/group/kubenet/DNS 0.16
315 TestNetworkPlugins/group/kubenet/Localhost 0.16
316 TestNetworkPlugins/group/kubenet/HairPin 0.13
319 TestNetworkPlugins/group/calico/ControllerPod 5.02
320 TestNetworkPlugins/group/calico/KubeletFlags 0.42
321 TestNetworkPlugins/group/calico/NetCatPod 19.21
322 TestNetworkPlugins/group/calico/DNS 0.14
323 TestNetworkPlugins/group/calico/Localhost 0.11
324 TestNetworkPlugins/group/calico/HairPin 0.13
326 TestStartStop/group/no-preload/serial/FirstStart 63.93
327 TestStartStop/group/no-preload/serial/DeployApp 9.27
328 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.95
329 TestStartStop/group/no-preload/serial/Stop 10.86
330 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
331 TestStartStop/group/no-preload/serial/SecondStart 303.63
334 TestStartStop/group/old-k8s-version/serial/Stop 1.58
335 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
337 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 10.02
338 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
339 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.44
340 TestStartStop/group/no-preload/serial/Pause 3.3
342 TestStartStop/group/embed-certs/serial/FirstStart 52.99
343 TestStartStop/group/embed-certs/serial/DeployApp 13.28
344 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.9
345 TestStartStop/group/embed-certs/serial/Stop 10.96
346 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
347 TestStartStop/group/embed-certs/serial/SecondStart 305.42
349 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.02
350 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
351 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
352 TestStartStop/group/embed-certs/serial/Pause 3.35
354 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 46.77
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.28
356 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.83
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.98
358 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.4
359 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 307.69
360 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 8.02
361 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
362 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.44
363 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.27
365 TestStartStop/group/newest-cni/serial/FirstStart 44.29
366 TestStartStop/group/newest-cni/serial/DeployApp 0
367 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.91
368 TestStartStop/group/newest-cni/serial/Stop 5.8
369 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
370 TestStartStop/group/newest-cni/serial/SecondStart 30.88
371 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
372 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
373 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
374 TestStartStop/group/newest-cni/serial/Pause 3.79
TestDownloadOnly/v1.16.0/json-events (12.77s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-551000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-551000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (12.771736701s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (12.77s)
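With `-o=json`, minikube writes its progress as one JSON event per output line. A minimal sketch of a consumer for such a stream, assuming only that each line is a standalone JSON object with a `type` discriminator (the field name is an assumption, not taken from this report):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any line that is not a JSON object
		}
		// Print the event's type field if present; otherwise dump the object.
		if t, ok := ev["type"].(string); ok {
			fmt.Println(t)
		} else {
			fmt.Println(ev)
		}
	}
}

Piping a `start -o=json --download-only ...` run into this program would print one event type per line.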

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-551000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-551000: exit status 85 (307.750217ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |         Version          |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-551000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 10:39 PST |          |
	|         | -p download-only-551000        |                      |         |                          |                     |          |
	|         | --force --alsologtostderr      |                      |         |                          |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                          |                     |          |
	|         | --container-runtime=docker     |                      |         |                          |                     |          |
	|         | --driver=docker                |                      |         |                          |                     |          |
	|---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 10:39:03
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 10:39:03.235369   25984 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:39:03.236032   25984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:39:03.236040   25984 out.go:309] Setting ErrFile to fd 2...
	I0128 10:39:03.236046   25984 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:39:03.236266   25984 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	W0128 10:39:03.236615   25984 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-24808/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-24808/.minikube/config/config.json: no such file or directory
	I0128 10:39:03.237367   25984 out.go:303] Setting JSON to true
	I0128 10:39:03.255733   25984 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5918,"bootTime":1674925225,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 10:39:03.255821   25984 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:39:03.278003   25984 out.go:97] [download-only-551000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 10:39:03.278230   25984 notify.go:220] Checking for updates...
	I0128 10:39:03.299225   25984 out.go:169] MINIKUBE_LOCATION=15565
	W0128 10:39:03.278241   25984 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball: no such file or directory
	I0128 10:39:03.341659   25984 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 10:39:03.363700   25984 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:39:03.385869   25984 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:39:03.407899   25984 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	W0128 10:39:03.450664   25984 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 10:39:03.451081   25984 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:39:03.512081   25984 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:39:03.512198   25984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:39:03.652860   25984 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:39:03.561061332 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:39:03.674936   25984 out.go:97] Using the docker driver based on user configuration
	I0128 10:39:03.675054   25984 start.go:296] selected driver: docker
	I0128 10:39:03.675072   25984 start.go:857] validating driver "docker" against <nil>
	I0128 10:39:03.675307   25984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:39:03.815967   25984 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:39:03.725275623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:39:03.816097   25984 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0128 10:39:03.818479   25984 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0128 10:39:03.818615   25984 start_flags.go:899] Wait components to verify : map[apiserver:true system_pods:true]
	I0128 10:39:03.840492   25984 out.go:169] Using Docker Desktop driver with root privileges
	I0128 10:39:03.862189   25984 cni.go:84] Creating CNI manager for ""
	I0128 10:39:03.862230   25984 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0128 10:39:03.862248   25984 start_flags.go:319] config:
	{Name:download-only-551000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-551000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:39:03.884265   25984 out.go:97] Starting control plane node download-only-551000 in cluster download-only-551000
	I0128 10:39:03.884397   25984 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:39:03.906107   25984 out.go:97] Pulling base image ...
	I0128 10:39:03.906210   25984 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:39:03.906291   25984 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:39:03.960550   25984 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:39:03.960824   25984 image.go:61] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
	I0128 10:39:03.960953   25984 image.go:119] Writing gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:39:03.964085   25984 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 10:39:03.964100   25984 cache.go:57] Caching tarball of preloaded images
	I0128 10:39:03.964252   25984 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:39:03.985058   25984 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0128 10:39:03.985080   25984 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:39:04.065850   25984 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0128 10:39:08.471750   25984 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:39:08.471874   25984 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:39:09.018722   25984 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0128 10:39:09.018922   25984 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/download-only-551000/config.json ...
	I0128 10:39:09.018946   25984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/download-only-551000/config.json: {Name:mka0b32a2896547f4debfd4cf1db5ad3b1d15439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0128 10:39:09.019193   25984 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0128 10:39:09.019448   25984 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-551000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
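The `download.go:101` entries above append `?checksum=md5:...` to the preload URL, meaning the tarball is hashed as it streams to disk and the digest is verified afterwards. A minimal sketch of that download-then-verify pattern in Go, reusing the URL and md5 that appear in the log:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest, hashing the bytes as they pass
// through, then compares the digest to wantHex.
func downloadWithMD5(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// TeeReader feeds every byte written to the file into the hash as well.
	if _, err := io.Copy(out, io.TeeReader(resp.Body, h)); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// URL and md5 taken from the download.go lines in the log above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4"
	err := downloadWithMD5(url, "preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4", "326f3ce331abb64565b50b8c9e791244")
	fmt.Println(err)
}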

                                                
                                    
TestDownloadOnly/v1.26.1/json-events (7.64s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-551000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-551000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (7.639000176s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (7.64s)

                                                
                                    
TestDownloadOnly/v1.26.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-551000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-551000: exit status 85 (298.061329ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |         Version          |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-551000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 10:39 PST |          |
	|         | -p download-only-551000        |                      |         |                          |                     |          |
	|         | --force --alsologtostderr      |                      |         |                          |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |                          |                     |          |
	|         | --container-runtime=docker     |                      |         |                          |                     |          |
	|         | --driver=docker                |                      |         |                          |                     |          |
	| start   | -o=json --download-only        | download-only-551000 | jenkins | v1.29.0-1674856271-15565 | 28 Jan 23 10:39 PST |          |
	|         | -p download-only-551000        |                      |         |                          |                     |          |
	|         | --force --alsologtostderr      |                      |         |                          |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |                          |                     |          |
	|         | --container-runtime=docker     |                      |         |                          |                     |          |
	|         | --driver=docker                |                      |         |                          |                     |          |
	|---------|--------------------------------|----------------------|---------|--------------------------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/01/28 10:39:16
	Running on machine: MacOS-Agent-3
	Binary: Built with gc go1.19.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0128 10:39:16.319926   26027 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:39:16.320163   26027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:39:16.320169   26027 out.go:309] Setting ErrFile to fd 2...
	I0128 10:39:16.320172   26027 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:39:16.320277   26027 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	W0128 10:39:16.320373   26027 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15565-24808/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15565-24808/.minikube/config/config.json: no such file or directory
	I0128 10:39:16.320718   26027 out.go:303] Setting JSON to true
	I0128 10:39:16.339485   26027 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":5931,"bootTime":1674925225,"procs":397,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 10:39:16.339634   26027 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:39:16.361311   26027 out.go:97] [download-only-551000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 10:39:16.361550   26027 notify.go:220] Checking for updates...
	I0128 10:39:16.383025   26027 out.go:169] MINIKUBE_LOCATION=15565
	I0128 10:39:16.404116   26027 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 10:39:16.426244   26027 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:39:16.448155   26027 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:39:16.469956   26027 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	W0128 10:39:16.511927   26027 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0128 10:39:16.512626   26027 config.go:180] Loaded profile config "download-only-551000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0128 10:39:16.512706   26027 start.go:765] api.Load failed for download-only-551000: filestore "download-only-551000": Docker machine "download-only-551000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0128 10:39:16.512785   26027 driver.go:365] Setting default libvirt URI to qemu:///system
	W0128 10:39:16.512819   26027 start.go:765] api.Load failed for download-only-551000: filestore "download-only-551000": Docker machine "download-only-551000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0128 10:39:16.571432   26027 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:39:16.571543   26027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:39:16.710955   26027 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:39:16.619684164 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:39:16.732571   26027 out.go:97] Using the docker driver based on existing profile
	I0128 10:39:16.732673   26027 start.go:296] selected driver: docker
	I0128 10:39:16.732686   26027 start.go:857] validating driver "docker" against &{Name:download-only-551000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-551000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:39:16.732979   26027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:39:16.874595   26027 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:47 OomKillDisable:false NGoroutines:52 SystemTime:2023-01-28 18:39:16.783184764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:39:16.877297   26027 cni.go:84] Creating CNI manager for ""
	I0128 10:39:16.877325   26027 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0128 10:39:16.877341   26027 start_flags.go:319] config:
	{Name:download-only-551000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-551000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:39:16.899153   26027 out.go:97] Starting control plane node download-only-551000 in cluster download-only-551000
	I0128 10:39:16.899235   26027 cache.go:120] Beginning downloading kic base image for docker with docker
	I0128 10:39:16.920887   26027 out.go:97] Pulling base image ...
	I0128 10:39:16.921001   26027 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 10:39:16.921105   26027 image.go:77] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local docker daemon
	I0128 10:39:16.976643   26027 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0128 10:39:16.976665   26027 cache.go:57] Caching tarball of preloaded images
	I0128 10:39:16.976905   26027 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0128 10:39:16.977130   26027 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 to local cache
	I0128 10:39:16.977211   26027 image.go:61] Checking for gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory
	I0128 10:39:16.977229   26027 image.go:64] Found gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 in local cache directory, skipping pull
	I0128 10:39:16.977234   26027 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 exists in cache, skipping pull
	I0128 10:39:16.977247   26027 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 as a tarball
	I0128 10:39:16.997784   26027 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0128 10:39:16.997878   26027 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0128 10:39:17.079479   26027 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15565-24808/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-551000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.30s)
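
Note on the exit status above: 85 is the expected outcome here, not a regression. A --download-only profile never creates a node, so "minikube logs" has nothing to read, and the assertion at aaa_download_only_test.go:174 passes precisely because the command fails. A minimal Go sketch of that check, reusing the binary path and profile name from the log above (a simplified reconstruction run from the build root, not the harness's actual helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Expect failure: the download-only profile has no running control plane.
		cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-551000")
		if err := cmd.Run(); err == nil {
			fmt.Println("unexpected: logs succeeded on a download-only profile")
		} else {
			fmt.Printf("expected failure: %v\n", err) // e.g. "exit status 85"
		}
	}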

TestDownloadOnly/DeleteAll (0.67s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.4s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-551000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.40s)

TestDownloadOnlyKic (12.34s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-822000 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-822000 --force --alsologtostderr --driver=docker : (11.233658273s)
helpers_test.go:175: Cleaning up "download-docker-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-822000
--- PASS: TestDownloadOnlyKic (12.34s)

TestBinaryMirror (1.76s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-309000 --alsologtostderr --binary-mirror http://127.0.0.1:57146 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-309000 --alsologtostderr --binary-mirror http://127.0.0.1:57146 --driver=docker : (1.130741303s)
helpers_test.go:175: Cleaning up "binary-mirror-309000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-309000
--- PASS: TestBinaryMirror (1.76s)
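
For context, TestBinaryMirror verifies that Kubernetes binaries can be fetched from an alternate URL instead of the default upstream; the harness serves that mirror on 127.0.0.1:57146. A stand-in mirror can be sketched in a few lines of Go (the directory layout in the comment is an assumption about how minikube builds the download path, not something this log confirms):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve the current directory; binaries would be expected under a
		// release-style tree, e.g. ./v1.26.1/bin/darwin/amd64/kubectl (assumed).
		log.Fatal(http.ListenAndServe("127.0.0.1:57146", http.FileServer(http.Dir("."))))
	}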

TestOffline (53.59s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-015000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-015000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (50.941188293s)
helpers_test.go:175: Cleaning up "offline-docker-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-015000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-015000: (2.650703861s)
--- PASS: TestOffline (53.59s)

TestAddons/Setup (143.68s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-582000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-582000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.684435864s)
--- PASS: TestAddons/Setup (143.68s)

TestAddons/parallel/MetricsServer (5.84s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.202273ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-g874j" [d644ff2c-f090-4072-a737-5bc9f8afda1c] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.027876256s
addons_test.go:380: (dbg) Run:  kubectl --context addons-582000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-582000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.84s)

TestAddons/parallel/HelmTiller (17.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.59387ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-wbc86" [c3490f65-7733-4d49-9201-0ab2f77c85a0] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009957178s
addons_test.go:438: (dbg) Run:  kubectl --context addons-582000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-582000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.589882963s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-582000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (17.16s)

TestAddons/parallel/CSI (42.59s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 8.031465ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-582000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-582000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [f29f7575-5da9-48eb-b0a6-695622660926] Pending
helpers_test.go:344: "task-pv-pod" [f29f7575-5da9-48eb-b0a6-695622660926] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [f29f7575-5da9-48eb-b0a6-695622660926] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.00733792s
addons_test.go:549: (dbg) Run:  kubectl --context addons-582000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-582000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-582000 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-582000 delete pod task-pv-pod: (1.001058039s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-582000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-582000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-582000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-582000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [58a222f9-e7b2-4779-aa83-e2e8b12aeec4] Pending
helpers_test.go:344: "task-pv-pod-restore" [58a222f9-e7b2-4779-aa83-e2e8b12aeec4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [58a222f9-e7b2-4779-aa83-e2e8b12aeec4] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 13.00978109s
addons_test.go:591: (dbg) Run:  kubectl --context addons-582000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-582000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-582000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-582000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-582000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.956474737s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-582000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.59s)
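
The helpers_test.go:394 lines above are a poll loop: the harness repeatedly reads the claim's status.phase until it reports Bound or the 6m0s budget runs out. A simplified, self-contained Go sketch of the same wait, reusing the context, claim name, and timeout from this test (the poll interval is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute) // matches the test's 6m0s budget
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-582000",
				"get", "pvc", "hpvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pvc hpvc")
	}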

TestAddons/parallel/Headlamp (12.59s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-582000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-582000 --alsologtostderr -v=1: (1.576951126s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-xrtq5" [c1fb7ead-476d-431f-a2cb-09c402e318e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-xrtq5" [c1fb7ead-476d-431f-a2cb-09c402e318e2] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.010330306s
--- PASS: TestAddons/parallel/Headlamp (12.59s)

TestAddons/parallel/CloudSpanner (5.45s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b7f8b64-pjxh6" [f6ef0774-29d3-4223-b910-acfd672a6658] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009293878s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-582000
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

TestAddons/serial/GCPAuth/Namespaces (2.44s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-582000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-582000 get secret gcp-auth -n new-namespace
addons_test.go:629: (dbg) Non-zero exit: kubectl --context addons-582000 get secret gcp-auth -n new-namespace: exit status 1 (55.121667ms)

** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

** /stderr **
addons_test.go:621: (dbg) Run:  kubectl --context addons-582000 logs -l app=gcp-auth -n gcp-auth
addons_test.go:629: (dbg) Run:  kubectl --context addons-582000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.44s)

TestAddons/StoppedEnableDisable (11.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-582000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-582000: (11.077706435s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-582000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-582000
--- PASS: TestAddons/StoppedEnableDisable (11.53s)

TestCertOptions (44.82s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-338000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-338000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (41.225115662s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-338000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-338000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-338000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-338000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-338000: (2.706454571s)
--- PASS: TestCertOptions (44.82s)

TestCertExpiration (249.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-294000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-294000 --memory=2048 --cert-expiration=3m --driver=docker : (35.039072476s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-294000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0128 11:19:14.566997   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-294000 --memory=2048 --cert-expiration=8760h --driver=docker : (31.950257901s)
helpers_test.go:175: Cleaning up "cert-expiration-294000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-294000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-294000: (2.600258657s)
--- PASS: TestCertExpiration (249.59s)
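
The 249.59s total is dominated by waiting, not by the commands: the two starts and the delete account for roughly 35.0s + 32.0s + 2.6s ≈ 69.6s, and the remaining 249.59s − 69.6s ≈ 180s matches the --cert-expiration=3m window, which the test has to sit out so the short-lived certificates genuinely expire before the second start (with --cert-expiration=8760h) proves they can be renewed.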

TestDockerFlags (34.88s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-395000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-395000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (31.207942812s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-395000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-395000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-395000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-395000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-395000: (2.7584621s)
--- PASS: TestDockerFlags (34.88s)

TestForceSystemdFlag (37.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-216000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-216000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (34.693387687s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-216000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-216000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-216000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-216000: (2.720491953s)
--- PASS: TestForceSystemdFlag (37.93s)

TestForceSystemdEnv (37.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-884000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-884000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (33.990660676s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-884000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-884000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-884000: (2.72857024s)
--- PASS: TestForceSystemdEnv (37.27s)

TestHyperKitDriverInstallOrUpdate (8.87s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.87s)

TestErrorSpam/start (2.48s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 start --dry-run
--- PASS: TestErrorSpam/start (2.48s)

TestErrorSpam/status (1.27s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 status
--- PASS: TestErrorSpam/status (1.27s)

TestErrorSpam/pause (1.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (1.9s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 unpause
--- PASS: TestErrorSpam/unpause (1.90s)

TestErrorSpam/stop (2.83s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 stop: (2.181700104s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-254000 --log_dir /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/nospam-254000 stop
--- PASS: TestErrorSpam/stop (2.83s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15565-24808/.minikube/files/etc/test/nested/copy/25982/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.24s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-251000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (46.242709396s)
--- PASS: TestFunctional/serial/StartWithProxy (46.24s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.96s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-251000 --alsologtostderr -v=8: (40.960402265s)
functional_test.go:656: soft start took 40.960967869s for "functional-251000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-251000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:3.1: (2.393609374s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:3.3: (2.356536347s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 cache add k8s.gcr.io/pause:latest: (2.235730852s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.99s)

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3882875422/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache add minikube-local-cache-test:functional-251000
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 cache add minikube-local-cache-test:functional-251000: (1.131714268s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache delete minikube-local-cache-test:functional-251000
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-251000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (402.129141ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 cache reload: (1.534650694s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.81s)
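
The sequence above is the point of cache_reload: delete the image inside the node, confirm crictl no longer finds it (the intermediate exit status 1 is expected), then let "cache reload" push the host-side cache back into the node so the final inspecti succeeds. A simplified Go sketch of the same round trip, with the profile and image names taken from the log (not the harness's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) error {
		return exec.Command("out/minikube-darwin-amd64", args...).Run()
	}

	func main() {
		p := "functional-251000"
		_ = run("-p", p, "ssh", "sudo docker rmi k8s.gcr.io/pause:latest")
		if run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest") == nil {
			fmt.Println("image unexpectedly still present")
			return
		}
		_ = run("-p", p, "cache", "reload") // repopulates the node from the host cache
		if err := run("-p", p, "ssh", "sudo crictl inspecti k8s.gcr.io/pause:latest"); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		} else {
			fmt.Println("cache reload restored k8s.gcr.io/pause:latest")
		}
	}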

TestFunctional/serial/CacheCmd/cache/delete (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.55s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 kubectl -- --context functional-251000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-251000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.69s)

TestFunctional/serial/ExtraConfig (43.86s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-251000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.857829548s)
functional_test.go:754: restart took 43.857988266s for "functional-251000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.86s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-251000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (3.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 logs: (3.129509224s)
--- PASS: TestFunctional/serial/LogsCmd (3.13s)

TestFunctional/serial/LogsFileCmd (3.18s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd139576039/001/logs.txt
E0128 10:47:03.441725   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.528690   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.539886   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.560889   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.601040   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.681418   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:03.843605   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:47:04.163757   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 logs --file /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalserialLogsFileCmd139576039/001/logs.txt: (3.174173365s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.18s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config get cpus
E0128 10:47:04.804271   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 config get cpus: exit status 14 (57.091255ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 config get cpus: exit status 14 (61.197578ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (8.83s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-251000 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-251000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 28443: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.83s)

TestFunctional/parallel/DryRun (1.7s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (908.006235ms)

-- stdout --
	* [functional-251000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0128 10:48:01.032163   28390 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:48:01.032465   28390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:01.032473   28390 out.go:309] Setting ErrFile to fd 2...
	I0128 10:48:01.032487   28390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:01.032714   28390 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 10:48:01.069551   28390 out.go:303] Setting JSON to false
	I0128 10:48:01.091026   28390 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6456,"bootTime":1674925225,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 10:48:01.091116   28390 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:48:01.115955   28390 out.go:177] * [functional-251000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	I0128 10:48:01.174189   28390 notify.go:220] Checking for updates...
	I0128 10:48:01.211969   28390 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:48:01.285886   28390 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 10:48:01.306957   28390 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:48:01.364810   28390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:48:01.439008   28390 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 10:48:01.496919   28390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:48:01.518122   28390 config.go:180] Loaded profile config "functional-251000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:48:01.518477   28390 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:48:01.582940   28390 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:48:01.583091   28390 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:48:01.729683   28390 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 18:48:01.637009295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:48:01.753482   28390 out.go:177] * Using the docker driver based on existing profile
	I0128 10:48:01.775077   28390 start.go:296] selected driver: docker
	I0128 10:48:01.775091   28390 start.go:857] validating driver "docker" against &{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-251000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:48:01.775186   28390 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:48:01.799279   28390 out.go:177] 
	W0128 10:48:01.820360   28390 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0128 10:48:01.841036   28390 out.go:177] 

** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.70s)
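
The dry-run above doubles as a check of minikube's memory floor. A minimal reproduction sketch, again substituting an installed minikube for the CI build; the 250MB request, the 1800MB minimum, and exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) are all taken from this run's output:

    minikube start -p functional-251000 --dry-run --memory 250MB --driver=docker
    echo $?   # 23: requested 250MiB is below the usable minimum of 1800MB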

TestFunctional/parallel/InternationalLanguage (0.74s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=docker 

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-251000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (738.603515ms)

-- stdout --
	* [functional-251000] minikube v1.29.0-1674856271-15565 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0128 10:48:00.269612   28375 out.go:296] Setting OutFile to fd 1 ...
	I0128 10:48:00.269773   28375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:00.269779   28375 out.go:309] Setting ErrFile to fd 2...
	I0128 10:48:00.269783   28375 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 10:48:00.269915   28375 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 10:48:00.270376   28375 out.go:303] Setting JSON to false
	I0128 10:48:00.290862   28375 start.go:125] hostinfo: {"hostname":"MacOS-Agent-3.local","uptime":6455,"bootTime":1674925225,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"bd1c05a8-24a6-5973-aa69-f3c7c66a87ce"}
	W0128 10:48:00.290958   28375 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0128 10:48:00.313792   28375 out.go:177] * [functional-251000] minikube v1.29.0-1674856271-15565 sur Darwin 13.2
	I0128 10:48:00.335736   28375 notify.go:220] Checking for updates...
	I0128 10:48:00.356578   28375 out.go:177]   - MINIKUBE_LOCATION=15565
	I0128 10:48:00.399441   28375 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	I0128 10:48:00.441546   28375 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0128 10:48:00.483499   28375 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0128 10:48:00.525487   28375 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	I0128 10:48:00.546563   28375 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0128 10:48:00.567787   28375 config.go:180] Loaded profile config "functional-251000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 10:48:00.568136   28375 driver.go:365] Setting default libvirt URI to qemu:///system
	I0128 10:48:00.637833   28375 docker.go:141] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0128 10:48:00.637979   28375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0128 10:48:00.781250   28375 info.go:266] docker info: {ID:O4T2:OPZV:SRVN:GAA5:IEOW:TFNR:UPTT:2CKV:GY3E:3NIN:VS7C:3BCB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:57 SystemTime:2023-01-28 18:48:00.688160132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0128 10:48:00.823938   28375 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0128 10:48:00.844869   28375 start.go:296] selected driver: docker
	I0128 10:48:00.844893   28375 start.go:857] validating driver "docker" against &{Name:functional-251000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.37@sha256:8bf7a0e8a062bc5e2b71d28b35bfa9cc862d9220e234e86176b3785f685d8b15 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-251000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0128 10:48:00.845040   28375 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0128 10:48:00.869695   28375 out.go:177] 
	W0128 10:48:00.891065   28375 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0128 10:48:00.911955   28375 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.74s)

TestFunctional/parallel/StatusCmd (1.76s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.76s)

TestFunctional/parallel/ServiceCmd (14.26s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-251000 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-251000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-r65fz" [760d6430-6f8f-4b37-95ee-1b2d7a32f4cb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-r65fz" [760d6430-6f8f-4b37-95ee-1b2d7a32f4cb] Running
E0128 10:47:44.489003   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 7.007173159s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1449: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 service list: (1.056245264s)
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 service --namespace=default --https --url hello-node: (2.026276724s)
functional_test.go:1476: found endpoint: https://127.0.0.1:58044
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 service hello-node --url --format={{.IP}}: (2.02580334s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 service hello-node --url
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 service hello-node --url: (2.027767265s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:58058
--- PASS: TestFunctional/parallel/ServiceCmd (14.26s)

TestFunctional/parallel/AddonsCmd (0.29s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.29s)

TestFunctional/parallel/PersistentVolumeClaim (28.95s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e4fb3fde-75eb-471e-83f2-9bef0cb1433f] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009445224s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-251000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-251000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [08896e2a-41ac-417b-baae-69f4a39c03b3] Pending
helpers_test.go:344: "sp-pod" [08896e2a-41ac-417b-baae-69f4a39c03b3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [08896e2a-41ac-417b-baae-69f4a39c03b3] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.009632219s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-251000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-251000 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-251000 delete -f testdata/storage-provisioner/pod.yaml: (1.212589232s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8338da28-a1e6-4eac-ae0e-223c26c0765e] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:344: "sp-pod" [8338da28-a1e6-4eac-ae0e-223c26c0765e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8338da28-a1e6-4eac-ae0e-223c26c0765e] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.008218622s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-251000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.95s)
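
The persistence check above can be replayed with kubectl alone. A minimal sketch using the pod and file names from this run (the testdata manifests live in the minikube source tree):

    kubectl --context functional-251000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-251000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-251000 apply -f testdata/storage-provisioner/pod.yaml
    # once the recreated sp-pod is Running, the file should have survived the delete
    kubectl --context functional-251000 exec sp-pod -- ls /tmp/mount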

TestFunctional/parallel/SSHCmd (1.14s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.14s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh -n functional-251000 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 cp functional-251000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelCpCmd4025780437/001/cp-test.txt
E0128 10:47:08.646099   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh -n functional-251000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)

TestFunctional/parallel/MySQL (25.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-251000 replace --force -f testdata/mysql.yaml
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-zrvwc" [23126033-3914-4566-99ab-e49150ba06d1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:344: "mysql-888f84dd9-zrvwc" [23126033-3914-4566-99ab-e49150ba06d1] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.012975274s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;": exit status 1 (117.352559ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1734: (dbg) Run:  kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;"
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;": exit status 1 (124.592754ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Non-zero exit: kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;": exit status 1 (108.283964ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1734: (dbg) Run:  kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.05s)
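
The "Access denied" and "Can't connect" errors above are expected while mysqld is still initializing; the test simply retries the query until it succeeds. A minimal sketch of the same polling loop, using the pod name from this run:

    until kubectl --context functional-251000 exec mysql-888f84dd9-zrvwc -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2   # retry until the server comes up and accepts the root password
    done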

TestFunctional/parallel/FileSync (0.45s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/25982/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /etc/test/nested/copy/25982/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.45s)

TestFunctional/parallel/CertSync (2.64s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/25982.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/25982.pem"
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/25982.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /usr/share/ca-certificates/25982.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0128 10:47:06.085917   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
functional_test.go:1926: Checking for existence of /etc/ssl/certs/259822.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/259822.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/259822.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /usr/share/ca-certificates/259822.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.64s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-251000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo systemctl is-active crio"
2023/01/28 10:48:11 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh "sudo systemctl is-active crio": exit status 1 (428.254482ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.43s)

TestFunctional/parallel/DockerEnv/bash (1.7s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-251000 docker-env) && out/minikube-darwin-amd64 status -p functional-251000"

=== CONT  TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-251000 docker-env) && out/minikube-darwin-amd64 status -p functional-251000": (1.047408215s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-251000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.70s)
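
The docker-env round trip above is reproducible in any local shell. A minimal sketch, again assuming an installed minikube rather than the CI build:

    eval $(minikube -p functional-251000 docker-env)   # point the local docker client at the node's daemon
    docker images                                      # now lists images inside the minikube node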

TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.33s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.49s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.49s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.32s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-251000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.3s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-251000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [605ea020-1d69-487b-840e-d633baf6ac79] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E0128 10:47:13.767094   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:344: "nginx-svc" [605ea020-1d69-487b-840e-d633baf6ac79] Running
E0128 10:47:24.008414   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 18.011644672s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (18.30s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-251000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-251000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 28121: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "427.562783ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "84.503989ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1362: Took "482.577725ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "95.722384ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.58s)

TestFunctional/parallel/MountCmd/any-port (10.32s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port288137351/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1674931678033650000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port288137351/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1674931678033650000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port288137351/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1674931678033650000" to /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port288137351/001/test-1674931678033650000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (443.355344ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh -- ls -la /mount-9p
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 28 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 28 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 28 18:47 test-1674931678033650000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh cat /mount-9p/test-1674931678033650000
=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-251000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [227f2db6-b6f5-4303-a9e7-b7f2e5768168] Pending
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [227f2db6-b6f5-4303-a9e7-b7f2e5768168] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:344: "busybox-mount" [227f2db6-b6f5-4303-a9e7-b7f2e5768168] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [227f2db6-b6f5-4303-a9e7-b7f2e5768168] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007465823s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-251000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdany-port288137351/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.32s)

TestFunctional/parallel/MountCmd/specific-port (2.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2335842200/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (538.930335ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2335842200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh "sudo umount -f /mount-9p": exit status 1 (435.741455ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-251000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-251000 /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestFunctionalparallelMountCmdspecific-port2335842200/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.75s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-251000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-251000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-251000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls --format table
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-251000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7               | 9ec14ca3fec4d | 455MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| docker.io/library/nginx                     | alpine            | c433c51bbd661 | 40.7MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-251000 | cfabbfc27690f | 30B    |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| docker.io/library/nginx                     | latest            | a99a39d070bfd | 142MB  |
| gcr.io/google-containers/addon-resizer      | functional-251000 | ffd4cfbbe753e | 32.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-251000 image ls --format json:
[{"id":"cfabbfc27690ffaa187659f10c1ed7c638cd50b60fb209b066e1e8483a3f6cef","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-251000"],"size":"30"},{"id":"c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"65599999"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-251000"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls --format yaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-251000 image ls --format yaml:
- id: cfabbfc27690ffaa187659f10c1ed7c638cd50b60fb209b066e1e8483a3f6cef
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-251000
size: "30"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 9ec14ca3fec4d86d989ea6ac3f66af44da0298438e1082b0f1682dba5c912fdd
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: a99a39d070bfd1cb60fe65c45dea3a33764dc00a9546bf8dc46cb5a11b1b50e9
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-251000
size: "32900000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: c433c51bbd66153269da1c592105c9c19bf353e9d7c3d1225ae2bbbeb888cc16
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 ssh pgrep buildkitd
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-251000 ssh pgrep buildkitd: exit status 1 (426.426771ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build: (2.547937665s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-251000 image build -t localhost/my-image:functional-251000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7a7ebfbb9fdd
Removing intermediate container 7a7ebfbb9fdd
---> 8fa576f1ba69
Step 3/3 : ADD content.txt /
---> 1d290b133949
Successfully built 1d290b133949
Successfully tagged localhost/my-image:functional-251000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.30s)
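
For reference, the three build steps logged above imply that testdata/build contains a Dockerfile of roughly the following shape. This is a reconstruction from the logged steps, not the verbatim file:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /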

TestFunctional/parallel/ImageCommands/Setup (6.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (6.674592314s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-251000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (6.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000: (2.999125817s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000: (2.101893928s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.42s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0128 10:48:25.449447   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.912892319s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image load --daemon gcr.io/google-containers/addon-resizer:functional-251000: (2.919799445s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.21s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image save gcr.io/google-containers/addon-resizer:functional-251000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image save gcr.io/google-containers/addon-resizer:functional-251000 /Users/jenkins/workspace/addon-resizer-save.tar: (1.268087499s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.27s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image rm gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.374197139s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-251000 image save --daemon gcr.io/google-containers/addon-resizer:functional-251000
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-251000 image save --daemon gcr.io/google-containers/addon-resizer:functional-251000: (2.403716784s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-251000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.52s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-251000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-251000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-251000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.19s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-741000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-741000: (2.191084893s)
--- PASS: TestImageBuild/serial/NormalBuild (2.19s)

TestImageBuild/serial/BuildWithBuildArg (0.92s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-741000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-741000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.48s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-741000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.42s)

TestJSONOutput/start/Command (47.97s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-242000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0128 10:57:03.497149   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 10:57:07.342603   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-242000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (47.967803511s)
--- PASS: TestJSONOutput/start/Command (47.97s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.05s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-242000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 pause -p json-output-242000 --output=json --user=testUser: (1.051929875s)
--- PASS: TestJSONOutput/pause/Command (1.05s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-242000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-242000 --output=json --user=testUser
E0128 10:57:35.042589   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-242000 --output=json --user=testUser: (5.847758949s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-998000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-998000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (359.525158ms)
-- stdout --
	{"specversion":"1.0","id":"8e149fe6-4666-4bde-afa4-571dfba2a98e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-998000] minikube v1.29.0-1674856271-15565 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dc89ed08-a6c7-4852-8231-ce8087ce9a39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"1a760711-eb77-41de-8e82-72579b718353","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig"}}
	{"specversion":"1.0","id":"1a1b9d79-294c-48e7-95c6-e07fdec274c4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"52f631bb-c932-4395-98ff-2ba45f6ef3c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b530310-1d5f-406e-ace3-3333fb2264c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube"}}
	{"specversion":"1.0","id":"2450e1e8-c4e1-4616-9c0d-188ab8f8cc3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"bee7baab-1842-4e14-832a-c0fbc19916d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-998000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-998000
--- PASS: TestErrorJSONOutput (0.76s)

TestKicCustomNetwork/create_custom_network (35.33s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-933000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-933000 --network=: (32.598090226s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-933000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-933000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-933000: (2.679033139s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.33s)

TestKicCustomNetwork/use_default_bridge_network (42.38s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-960000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-960000 --network=bridge: (39.888473673s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-960000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-960000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-960000: (2.439147669s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (42.38s)

TestKicExistingNetwork (31.99s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-725000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-725000 --network=existing-network: (29.163802162s)
helpers_test.go:175: Cleaning up "existing-network-725000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-725000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-725000: (2.460488709s)
--- PASS: TestKicExistingNetwork (31.99s)

TestKicCustomSubnet (42.24s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-861000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-861000 --subnet=192.168.60.0/24: (39.562356328s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-861000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-861000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-861000: (2.615686957s)
--- PASS: TestKicCustomSubnet (42.24s)
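
The subnet assertion reduces to reading the network's IPAM config back from docker. A minimal sketch, assuming the docker driver and a placeholder profile name (subnet-demo):
$ minikube start -p subnet-demo --subnet=192.168.60.0/24 --driver=docker
$ docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expected output: 192.168.60.0/24
$ minikube delete -p subnet-demo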

TestKicStaticIP (35.38s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-679000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-679000 --static-ip=192.168.200.200: (32.554933559s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-679000 ip
helpers_test.go:175: Cleaning up "static-ip-679000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-679000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-679000: (2.573555501s)
--- PASS: TestKicStaticIP (35.38s)
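
--static-ip pins the cluster container's address instead of letting docker assign one; minikube expects a private IPv4 address here. A minimal sketch with a placeholder profile name (ip-demo):
$ minikube start -p ip-demo --static-ip=192.168.200.200 --driver=docker
$ minikube -p ip-demo ip        # expected output: 192.168.200.200
$ minikube delete -p ip-demo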

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (74.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-493000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-493000 --driver=docker : (35.202688733s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-495000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-495000 --driver=docker : (31.775810607s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-493000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-495000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-495000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-495000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-495000: (2.665835407s)
helpers_test.go:175: Cleaning up "first-493000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-493000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-493000: (2.614116813s)
--- PASS: TestMinikubeProfile (74.13s)
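
minikube profile switches which cluster subsequent commands target, and profile list renders the known profiles. A minimal sketch with placeholder profile names:
$ minikube start -p first --driver=docker
$ minikube start -p second --driver=docker
$ minikube profile first         # make "first" the active profile
$ minikube profile list -ojson   # both profiles, with their status, as JSON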

TestMountStart/serial/StartWithMountFirst (8.23s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-454000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
E0128 11:02:03.501886   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:02:07.347857   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-454000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.225112744s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.23s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-454000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)
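
The mount flags above wire a host directory into the node (minikube's mount is 9p-based; the gid/uid/msize/port values are passed through to it), and the ssh listing is the verification step. A minimal sketch with a placeholder profile name (mnt-demo):
$ minikube start -p mnt-demo --memory=2048 --mount --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 --no-kubernetes --driver=docker
$ minikube -p mnt-demo ssh -- ls /minikube-host   # lists the host directory mounted at /minikube-host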

TestMountStart/serial/StartWithMountSecond (8.02s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-466000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-466000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.019291211s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.02s)

TestMountStart/serial/VerifyMountSecond (0.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-466000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

TestMountStart/serial/DeleteFirst (2.14s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-454000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-454000 --alsologtostderr -v=5: (2.142603793s)
--- PASS: TestMountStart/serial/DeleteFirst (2.14s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-466000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.59s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-466000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-466000: (1.58985522s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (6.16s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-466000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-466000: (5.154854254s)
--- PASS: TestMountStart/serial/RestartStopped (6.16s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-466000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (77.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-513000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0128 11:03:26.635187   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-513000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m16.514663925s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.23s)

TestMultiNode/serial/DeployApp2Nodes (9.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-513000 -- rollout status deployment/busybox: (7.858919149s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-6jd7w -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-dm6b5 -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-6jd7w -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-dm6b5 -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-6jd7w -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-dm6b5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.69s)

TestMultiNode/serial/PingHostFrom2Pods (0.91s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-6jd7w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-6jd7w -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-dm6b5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-513000 -- exec busybox-6b86dd6d48-dm6b5 -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)
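
The pipeline above extracts the address that in-cluster DNS returns for host.minikube.internal (with busybox's nslookup output format the answer lands on line 5, hence NR==5), then pings it to prove the host is reachable from a pod. A minimal sketch with a placeholder pod name:
$ kubectl exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.65.2
$ kubectl exec <pod-name> -- sh -c "ping -c 1 192.168.65.2"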

TestMultiNode/serial/AddNode (23.65s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-513000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-513000 -v 3 --alsologtostderr: (22.457633722s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr: (1.190391176s)
--- PASS: TestMultiNode/serial/AddNode (23.65s)

TestMultiNode/serial/ProfileList (0.48s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.48s)

TestMultiNode/serial/CopyFile (14.97s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --output json --alsologtostderr
multinode_test.go:171: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 status --output json --alsologtostderr: (1.014100577s)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp testdata/cp-test.txt multinode-513000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile149931903/001/cp-test_multinode-513000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000:/home/docker/cp-test.txt multinode-513000-m02:/home/docker/cp-test_multinode-513000_multinode-513000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test_multinode-513000_multinode-513000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000:/home/docker/cp-test.txt multinode-513000-m03:/home/docker/cp-test_multinode-513000_multinode-513000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test_multinode-513000_multinode-513000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp testdata/cp-test.txt multinode-513000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m02:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile149931903/001/cp-test_multinode-513000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m02:/home/docker/cp-test.txt multinode-513000:/home/docker/cp-test_multinode-513000-m02_multinode-513000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test_multinode-513000-m02_multinode-513000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m02:/home/docker/cp-test.txt multinode-513000-m03:/home/docker/cp-test_multinode-513000-m02_multinode-513000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test_multinode-513000-m02_multinode-513000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp testdata/cp-test.txt multinode-513000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m03:/home/docker/cp-test.txt /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestMultiNodeserialCopyFile149931903/001/cp-test_multinode-513000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m03:/home/docker/cp-test.txt multinode-513000:/home/docker/cp-test_multinode-513000-m03_multinode-513000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test_multinode-513000-m03_multinode-513000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 cp multinode-513000-m03:/home/docker/cp-test.txt multinode-513000-m02:/home/docker/cp-test_multinode-513000-m03_multinode-513000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 ssh -n multinode-513000-m02 "sudo cat /home/docker/cp-test_multinode-513000-m03_multinode-513000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.97s)
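
The copy matrix above exercises three shapes of minikube cp, each verified over ssh. A minimal sketch against the same profile (the local destination path is a placeholder):
$ minikube -p multinode-513000 cp testdata/cp-test.txt multinode-513000-m02:/home/docker/cp-test.txt                         # host to node
$ minikube -p multinode-513000 cp multinode-513000-m02:/home/docker/cp-test.txt /tmp/cp-test.txt                             # node to host
$ minikube -p multinode-513000 cp multinode-513000-m02:/home/docker/cp-test.txt multinode-513000:/home/docker/cp-test.txt    # node to node
$ minikube -p multinode-513000 ssh -n multinode-513000 "sudo cat /home/docker/cp-test.txt"                                   # verify on the target node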

TestMultiNode/serial/StopNode (3.04s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 node stop m03: (1.522889573s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-513000 status: exit status 7 (754.786638ms)

-- stdout --
	multinode-513000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-513000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-513000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr: exit status 7 (761.783621ms)

-- stdout --
	multinode-513000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-513000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-513000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0128 11:04:42.241519   32511 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:04:42.241751   32511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:04:42.241756   32511 out.go:309] Setting ErrFile to fd 2...
	I0128 11:04:42.241760   32511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:04:42.241888   32511 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:04:42.242078   32511 out.go:303] Setting JSON to false
	I0128 11:04:42.242102   32511 mustload.go:65] Loading cluster: multinode-513000
	I0128 11:04:42.242142   32511 notify.go:220] Checking for updates...
	I0128 11:04:42.242400   32511 config.go:180] Loaded profile config "multinode-513000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:04:42.242414   32511 status.go:255] checking status of multinode-513000 ...
	I0128 11:04:42.242802   32511 cli_runner.go:164] Run: docker container inspect multinode-513000 --format={{.State.Status}}
	I0128 11:04:42.299944   32511 status.go:330] multinode-513000 host status = "Running" (err=<nil>)
	I0128 11:04:42.299971   32511 host.go:66] Checking if "multinode-513000" exists ...
	I0128 11:04:42.300214   32511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-513000
	I0128 11:04:42.358613   32511 host.go:66] Checking if "multinode-513000" exists ...
	I0128 11:04:42.358873   32511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:04:42.358942   32511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-513000
	I0128 11:04:42.416996   32511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58977 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/multinode-513000/id_rsa Username:docker}
	I0128 11:04:42.505953   32511 ssh_runner.go:195] Run: systemctl --version
	I0128 11:04:42.510547   32511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:04:42.519846   32511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-513000
	I0128 11:04:42.578172   32511 kubeconfig.go:92] found "multinode-513000" server: "https://127.0.0.1:58976"
	I0128 11:04:42.578202   32511 api_server.go:165] Checking apiserver status ...
	I0128 11:04:42.578240   32511 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0128 11:04:42.588367   32511 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup
	W0128 11:04:42.596549   32511 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0128 11:04:42.596618   32511 ssh_runner.go:195] Run: ls
	I0128 11:04:42.600485   32511 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:58976/healthz ...
	I0128 11:04:42.605105   32511 api_server.go:278] https://127.0.0.1:58976/healthz returned 200:
	ok
	I0128 11:04:42.605118   32511 status.go:421] multinode-513000 apiserver status = Running (err=<nil>)
	I0128 11:04:42.605130   32511 status.go:257] multinode-513000 status: &{Name:multinode-513000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 11:04:42.605140   32511 status.go:255] checking status of multinode-513000-m02 ...
	I0128 11:04:42.605368   32511 cli_runner.go:164] Run: docker container inspect multinode-513000-m02 --format={{.State.Status}}
	I0128 11:04:42.664509   32511 status.go:330] multinode-513000-m02 host status = "Running" (err=<nil>)
	I0128 11:04:42.664530   32511 host.go:66] Checking if "multinode-513000-m02" exists ...
	I0128 11:04:42.664781   32511 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-513000-m02
	I0128 11:04:42.722385   32511 host.go:66] Checking if "multinode-513000-m02" exists ...
	I0128 11:04:42.722639   32511 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0128 11:04:42.722695   32511 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-513000-m02
	I0128 11:04:42.781731   32511 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59054 SSHKeyPath:/Users/jenkins/minikube-integration/15565-24808/.minikube/machines/multinode-513000-m02/id_rsa Username:docker}
	I0128 11:04:42.874676   32511 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0128 11:04:42.885876   32511 status.go:257] multinode-513000-m02 status: &{Name:multinode-513000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0128 11:04:42.885900   32511 status.go:255] checking status of multinode-513000-m03 ...
	I0128 11:04:42.886167   32511 cli_runner.go:164] Run: docker container inspect multinode-513000-m03 --format={{.State.Status}}
	I0128 11:04:42.944894   32511 status.go:330] multinode-513000-m03 host status = "Stopped" (err=<nil>)
	I0128 11:04:42.944915   32511 status.go:343] host is not running, skipping remaining checks
	I0128 11:04:42.944923   32511 status.go:257] multinode-513000-m03 status: &{Name:multinode-513000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.04s)
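
The non-zero exits above are expected: minikube status encodes cluster state in its exit code, and 7 is what it returns here while a host is Stopped, so the test asserts on the code rather than failing on it. A minimal sketch:
$ minikube -p multinode-513000 node stop m03
$ minikube -p multinode-513000 status; echo "exit=$?"   # per-node state on stdout; exit=7 while m03's host is Stopped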

TestMultiNode/serial/StartAfterStop (10.63s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 node start m03 --alsologtostderr: (9.523687352s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.63s)

TestMultiNode/serial/RestartKeepsNodes (87.12s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-513000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-513000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-513000: (23.094325514s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-513000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-513000 --wait=true -v=8 --alsologtostderr: (1m3.903465224s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-513000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.12s)

TestMultiNode/serial/DeleteNode (6.18s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 node delete m03: (5.267465362s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.18s)

TestMultiNode/serial/StopMultiNode (21.95s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-513000 stop: (21.604064647s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-513000 status: exit status 7 (171.060143ms)

-- stdout --
	multinode-513000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-513000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr: exit status 7 (170.607048ms)

-- stdout --
	multinode-513000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-513000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0128 11:06:48.725197   33077 out.go:296] Setting OutFile to fd 1 ...
	I0128 11:06:48.725440   33077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:06:48.725445   33077 out.go:309] Setting ErrFile to fd 2...
	I0128 11:06:48.725449   33077 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0128 11:06:48.725568   33077 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15565-24808/.minikube/bin
	I0128 11:06:48.725743   33077 out.go:303] Setting JSON to false
	I0128 11:06:48.725768   33077 mustload.go:65] Loading cluster: multinode-513000
	I0128 11:06:48.725798   33077 notify.go:220] Checking for updates...
	I0128 11:06:48.726039   33077 config.go:180] Loaded profile config "multinode-513000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0128 11:06:48.726053   33077 status.go:255] checking status of multinode-513000 ...
	I0128 11:06:48.726418   33077 cli_runner.go:164] Run: docker container inspect multinode-513000 --format={{.State.Status}}
	I0128 11:06:48.782205   33077 status.go:330] multinode-513000 host status = "Stopped" (err=<nil>)
	I0128 11:06:48.782231   33077 status.go:343] host is not running, skipping remaining checks
	I0128 11:06:48.782239   33077 status.go:257] multinode-513000 status: &{Name:multinode-513000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0128 11:06:48.782257   33077 status.go:255] checking status of multinode-513000-m02 ...
	I0128 11:06:48.782501   33077 cli_runner.go:164] Run: docker container inspect multinode-513000-m02 --format={{.State.Status}}
	I0128 11:06:48.838825   33077 status.go:330] multinode-513000-m02 host status = "Stopped" (err=<nil>)
	I0128 11:06:48.838847   33077 status.go:343] host is not running, skipping remaining checks
	I0128 11:06:48.838856   33077 status.go:257] multinode-513000-m02 status: &{Name:multinode-513000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.95s)

TestMultiNode/serial/RestartMultiNode (69.29s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-513000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0128 11:07:03.521203   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:07:07.364138   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-513000 --wait=true -v=8 --alsologtostderr --driver=docker : (1m8.363188875s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-513000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (69.29s)

TestMultiNode/serial/ValidateNameConflict (34.29s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-513000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-513000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-513000-m02 --driver=docker : exit status 14 (386.80955ms)

-- stdout --
	* [multinode-513000-m02] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-513000-m02' is duplicated with machine name 'multinode-513000-m02' in profile 'multinode-513000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-513000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-513000-m03 --driver=docker : (30.674500165s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-513000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-513000: exit status 80 (505.130759ms)

-- stdout --
	* Adding node m03 to cluster multinode-513000
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-513000-m03 already exists in multinode-513000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-513000-m03
E0128 11:08:30.426995   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-513000-m03: (2.662321959s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.29s)

TestPreload (135.51s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-852000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-852000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m12.701450491s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-852000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-852000 -- docker pull gcr.io/k8s-minikube/busybox: (7.10510302s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-852000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-852000: (10.873907861s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-852000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-852000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (41.736965189s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-852000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-852000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-852000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-852000: (2.662659568s)
--- PASS: TestPreload (135.51s)
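
The preload round-trip above can be replayed by hand, as a minimal sketch with a placeholder profile name (preload-demo): start without the preloaded tarball, pull an extra image, stop, restart, and check the image survived.
$ minikube start -p preload-demo --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker
$ minikube ssh -p preload-demo -- docker pull gcr.io/k8s-minikube/busybox
$ minikube stop -p preload-demo
$ minikube start -p preload-demo --memory=2200 --driver=docker
$ minikube ssh -p preload-demo -- docker images   # gcr.io/k8s-minikube/busybox should still be listed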

TestScheduledStopUnix (109.28s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-161000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-161000 --memory=2048 --driver=docker : (34.978951802s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-161000 -n scheduled-stop-161000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161000 -n scheduled-stop-161000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-161000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-161000 --schedule 15s
E0128 11:12:03.526788   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:12:07.370152   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-161000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-161000: exit status 7 (117.621007ms)

-- stdout --
	scheduled-stop-161000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161000 -n scheduled-stop-161000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-161000 -n scheduled-stop-161000: exit status 7 (112.955402ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-161000: (2.335257351s)
--- PASS: TestScheduledStopUnix (109.28s)
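
Scheduled stop arms a background process that stops the cluster after the given delay; the TimeToStop field in status shows whether one is armed. A minimal sketch with a placeholder profile name (sched-demo):
$ minikube stop -p sched-demo --schedule 5m                 # arm a stop five minutes out
$ minikube status --format={{.TimeToStop}} -p sched-demo    # non-empty while a stop is armed
$ minikube stop -p sched-demo --cancel-scheduled            # disarm it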

TestSkaffold (64.49s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2761946011 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-054000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-054000 --memory=2600 --driver=docker : (32.070679427s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2761946011 run --minikube-profile skaffold-054000 --kube-context skaffold-054000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/skaffold.exe2761946011 run --minikube-profile skaffold-054000 --kube-context skaffold-054000 --status-check=true --port-forward=false --interactive=false: (17.816334959s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-54b956bf9f-b6ggt" [a441542f-9171-4527-86a4-736c5994a997] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.012911588s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5bd694bfd8-vtzgr" [749113d4-7913-4afc-971f-1c51bdd76ae9] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.006343056s
helpers_test.go:175: Cleaning up "skaffold-054000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-054000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-054000: (2.879073419s)
--- PASS: TestSkaffold (64.49s)
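
The skaffold flow above builds and deploys straight into the cluster selected by profile and context. A minimal sketch, assuming skaffold on PATH and a working directory that contains a skaffold.yaml:
$ minikube start -p skaffold-demo --memory=2600 --driver=docker
$ skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false
$ kubectl get pods   # the deployed workloads should reach Running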

TestInsufficientStorage (14.78s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-167000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-167000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.543534466s)

-- stdout --
	{"specversion":"1.0","id":"9c0e53e7-896c-4fd4-8770-c7ff9a95246f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-167000] minikube v1.29.0-1674856271-15565 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3eeb023-ca9f-4024-a74c-31c5b38bca28","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15565"}}
	{"specversion":"1.0","id":"02be9fe7-e97c-44cd-a174-f4742a3b2d13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig"}}
	{"specversion":"1.0","id":"e8f6f2f2-3fed-442a-a883-d366f2eca2d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"deff8b22-3703-4323-8cf6-7ef99777e1f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c5c5bcbd-c3aa-4a8e-a4cf-f9a94f5e0614","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube"}}
	{"specversion":"1.0","id":"00991b40-d12b-4427-979f-ff09257bb271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8bf67cb-1847-473a-a785-a6ff1f9ce7ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a20a69dd-352f-4efc-a726-0b7380772fb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6974921d-60b7-42c0-bc0b-dddf1633f4bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8dcecd37-dc09-4cf7-b505-77b55bd5e345","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"d4df6419-3004-4c75-87c4-bfcd487d8db9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-167000 in cluster insufficient-storage-167000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4ffaeb7-ccd9-4dce-b7de-08fd5dc61e35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b8ff73f-0fcc-4407-8388-5e81acf38f64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fdea21aa-22e4-44ff-9983-3b7a0f5c6d42","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-167000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-167000 --output=json --layout=cluster: exit status 7 (400.792799ms)

-- stdout --
	{"Name":"insufficient-storage-167000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0-1674856271-15565","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-167000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0128 11:13:58.431181   34880 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-167000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-167000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-167000 --output=json --layout=cluster: exit status 7 (397.132744ms)

-- stdout --
	{"Name":"insufficient-storage-167000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0-1674856271-15565","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-167000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0128 11:13:58.829134   34892 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-167000" does not appear in /Users/jenkins/minikube-integration/15565-24808/kubeconfig
	E0128 11:13:58.838133   34892 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/insufficient-storage-167000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-167000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-167000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-167000: (2.441827745s)
--- PASS: TestInsufficientStorage (14.78s)
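
The cluster-layout JSON captured above is regular enough to script against. As a minimal sketch (jq is an assumption here; it is not part of this test suite), the profile-level and per-component states can be pulled out with the field names exactly as they appear in the stdout capture (StatusName, Nodes, Components; the codes mirror HTTP: 507 InsufficientStorage, 500 Error, 405 Stopped):

    out/minikube-darwin-amd64 status -p insufficient-storage-167000 --output=json --layout=cluster \
      | jq -r '.StatusName, (.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)")'

For the first capture above this prints "InsufficientStorage", then "apiserver: Stopped" and "kubelet: Stopped".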

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.68s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0-1674856271-15565 on darwin
- MINIKUBE_LOCATION=15565
- KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4176757999/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4176757999/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4176757999/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/xd/3vdzn10d2gb_wxr7lj_p8h5c0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4176757999/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (8.68s)
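
The two privileged commands the run could not execute non-interactively are the standard hyperkit driver setup, just pointed at the test's temp MINIKUBE_HOME. A sketch of the same pair against the usual location (assuming the default ~/.minikube layout rather than the test-specific paths above):

    sudo chown root:wheel ~/.minikube/bin/docker-machine-driver-hyperkit
    sudo chmod u+s ~/.minikube/bin/docker-machine-driver-hyperkit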

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-318000
version_upgrade_test.go:214: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-318000: (3.59268379s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.59s)

                                                
                                    
TestPause/serial/Start (45.45s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-637000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0128 11:21:17.384379   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-637000 --memory=2048 --install-addons=false --wait=all --driver=docker : (45.445144667s)
--- PASS: TestPause/serial/Start (45.45s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (50.91s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-637000 --alsologtostderr -v=1 --driver=docker 
E0128 11:22:03.467046   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:22:07.311160   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-637000 --alsologtostderr -v=1 --driver=docker : (50.89237555s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (50.91s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-637000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-637000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-637000 --output=json --layout=cluster: exit status 2 (417.768424ms)

                                                
                                                
-- stdout --
	{"Name":"pause-637000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0-1674856271-15565","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-637000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
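
Two details in this capture are useful for scripting: the paused profile reports 418 (Paused) for the apiserver and 405 (Stopped) for the kubelet, and the status command itself exits 2 instead of 0. A sketch that gates on the exit code alone; note the code-to-state mapping is inferred only from this report's captures, not from a documented contract:

    out/minikube-darwin-amd64 status -p pause-637000 --output=json --layout=cluster
    [ $? -eq 2 ] && echo "profile paused or a component stopped"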

                                                
                                    
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-637000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
TestPause/serial/PauseAgain (0.80s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-637000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

                                                
                                    
TestPause/serial/DeletePaused (2.65s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-637000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-637000 --alsologtostderr -v=5: (2.647960001s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.57s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-637000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-637000: exit status 1 (53.736354ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-637000

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.57s)
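
VerifyDeletedResources treats the non-zero "docker volume inspect" above as proof the profile's storage is gone. The same post-delete check can be done by hand with only the docker CLI (profile name from this run):

    docker ps -a --filter name=pause-637000 --format '{{.Names}}'      # expect no output
    docker volume inspect pause-637000 >/dev/null 2>&1 || echo "volume gone"
    docker network ls --filter name=pause-637000 --format '{{.Name}}'  # expect no output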

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (402.473134ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-266000] minikube v1.29.0-1674856271-15565 on Darwin 13.2
	  - MINIKUBE_LOCATION=15565
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15565-24808/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15565-24808/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.40s)
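
The MK_USAGE failure above is the expected flag-conflict check: --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing and working invocations, with the unset hint from the stderr capture in between (profile name as in the test):

    out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --kubernetes-version=1.20 --driver=docker   # rejected, exit status 14
    minikube config unset kubernetes-version   # clears a globally configured version, per the hint above
    out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --driver=docker   # accepted (see StartWithStopK8s below)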

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (35.40s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-266000 --driver=docker 

                                                
                                                
=== CONT  TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-266000 --driver=docker : (34.858807295s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-266000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.65s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --driver=docker : (15.848328181s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-266000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-266000 status -o json: exit status 2 (405.79869ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-266000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-266000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-266000: (2.393356506s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.65s)
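
Note the two JSON shapes in this report: plain "status -o json" (above) returns the flat Host/Kubelet/APIServer/Kubeconfig record, while "--output=json --layout=cluster" (used by the status_test blocks) returns the nested Name/StatusCode/Nodes document. The flat form is the easier target for a quick gate (jq assumed, as before):

    out/minikube-darwin-amd64 -p NoKubernetes-266000 status -o json \
      | jq -e '.Host == "Running" and .Kubelet == "Stopped"'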

                                                
                                    
TestNoKubernetes/serial/Start (7.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-266000 --no-kubernetes --driver=docker : (7.178676374s)
--- PASS: TestNoKubernetes/serial/Start (7.18s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-266000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-266000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.974637ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.39s)
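
The exit status 1 above is the pass condition: systemctl is-active exits 0 only when the unit is active, and the inner status 3 is systemd's "inactive". A sketch of the same probe with the branch made explicit:

    if out/minikube-darwin-amd64 ssh -p NoKubernetes-266000 "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet is running"
    else
        echo "kubelet is not running"   # expected for a --no-kubernetes profile
    fi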

                                                
                                    
TestNoKubernetes/serial/ProfileList (32.90s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
E0128 11:23:33.537075   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (17.418753505s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (15.477202214s)
--- PASS: TestNoKubernetes/serial/ProfileList (32.90s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.60s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-266000
E0128 11:24:01.227284   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-266000: (1.604095286s)
--- PASS: TestNoKubernetes/serial/Stop (1.60s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (4.91s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-266000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-266000 --driver=docker : (4.913184127s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.91s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-266000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-266000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (388.803412ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.39s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (57.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p auto-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (57.213129185s)
--- PASS: TestNetworkPlugins/group/auto/Start (57.21s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-52gtv" [43f93c24-877c-4adf-92f9-52dc913a5d5c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:25:10.373204   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-52gtv" [43f93c24-877c-4adf-92f9-52dc913a5d5c] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.009046559s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.20s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
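
Each plugin group in this report repeats the same three connectivity probes against the netcat deployment; collected in one place for reference (context from the auto group, but every *-732000 context is probed identically):

    kubectl --context auto-732000 exec deployment/netcat -- nslookup kubernetes.default                      # DNS: in-cluster service resolution
    kubectl --context auto-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"      # Localhost: pod reaches its own port
    kubectl --context auto-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"         # HairPin: pod reaches itself via its service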

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (57.277857389s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (19.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vb6hj" [629b381e-24d6-481c-b1b3-4136c8bf0c72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-vb6hj" [629b381e-24d6-481c-b1b3-4136c8bf0c72] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 19.00889838s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (19.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/Start (49.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p false-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (49.293368397s)
--- PASS: TestNetworkPlugins/group/false/Start (49.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (52.52420819s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.52s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (24.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context false-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-rnwtv" [d8b96ba5-e5dc-4c97-bd28-12026d114af0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:28:33.536332   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-rnwtv" [d8b96ba5-e5dc-4c97-bd28-12026d114af0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 24.008470912s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (24.24s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:174: (dbg) Run:  kubectl --context false-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:193: (dbg) Run:  kubectl --context false-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:248: (dbg) Run:  kubectl --context false-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gfzqj" [203203b4-c5e1-439c-8f64-f0590ba1192e] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.013167762s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.01s)
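
ControllerPod just waits for the CNI's own pod to become Ready. The equivalent manual check (label and namespace from the log; kubectl wait used here as a standard stand-in for the test's polling helper):

    kubectl --context kindnet-732000 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-732000 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m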

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (19.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-x8z7t" [6379819c-431e-40ac-9ba2-4074252db32c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-x8z7t" [6379819c-431e-40ac-9ba2-4074252db32c] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 19.007888169s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (19.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (54.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (54.454394972s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (55.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (55.291342354s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (55.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-n4twj" [2fef2730-ec08-4d87-8866-add436c569a4] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.014528271s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-732000 "pgrep -a kubelet"
E0128 11:30:06.808803   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:06.814018   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:06.824195   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:06.845021   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:06.885139   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:06.965215   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (14.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-732000 replace --force -f testdata/netcat-deployment.yaml
E0128 11:30:07.125419   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-6srpg" [7487b3a1-41a1-4950-a156-621710a8fcf2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:30:07.445897   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:08.086086   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:09.366289   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:30:11.926745   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-6srpg" [7487b3a1-41a1-4950-a156-621710a8fcf2] Running
E0128 11:30:17.048814   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.006862617s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (19.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-9gb97" [3ab189a1-9af7-4c05-8f8c-0e03c2dab117] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:344: "netcat-694fc96674-9gb97" [3ab189a1-9af7-4c05-8f8c-0e03c2dab117] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 19.007715569s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (19.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (52.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
E0128 11:30:47.769532   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (52.429037357s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.43s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (52.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
E0128 11:31:28.731372   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (52.090407516s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-4rn48" [46e573c5-4fb7-4378-a3b2-0d92cf817b5d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:31:42.509445   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.514518   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.524618   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.545500   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.586505   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.666700   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:42.826861   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:43.147060   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:43.787271   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:45.067390   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:31:47.627654   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-4rn48" [46e573c5-4fb7-4378-a3b2-0d92cf817b5d] Running
E0128 11:31:52.748094   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.006740551s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (13.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kubenet-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-xkmdp" [58e418ad-3043-4c19-b02a-de3ce225d16f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
helpers_test.go:344: "netcat-694fc96674-xkmdp" [58e418ad-3043-4c19-b02a-de3ce225d16f] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:162: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.008845045s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.21s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (79.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Done: out/minikube-darwin-amd64 start -p calico-732000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m19.307832014s)
--- PASS: TestNetworkPlugins/group/calico/Start (79.31s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kubenet-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kubenet-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.16s)

TestNetworkPlugins/group/kubenet/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kubenet-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-nr56p" [6b5f145e-eb81-4c8b-99f5-011c6398f2a2] Running
E0128 11:33:38.983415   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.01774666s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-732000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.42s)

TestNetworkPlugins/group/calico/NetCatPod (19.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-732000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-bvbct" [28e32d8f-48bd-46cf-8d25-f652976f3f72] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0128 11:33:46.620263   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.626029   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.636188   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.656328   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.697395   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.778640   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:46.938846   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:47.259078   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:47.899548   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:49.181414   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:51.741786   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:33:56.861922   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-bvbct" [28e32d8f-48bd-46cf-8d25-f652976f3f72] Running
E0128 11:33:59.464447   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 19.007959319s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (19.21s)

TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-732000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-732000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestStartStop/group/no-preload/serial/FirstStart (63.93s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-337000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0128 11:34:27.583168   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:34:40.424732   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:34:56.591299   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:35:01.605200   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.610324   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.620780   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.640903   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.682968   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.763072   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:01.923306   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:02.243991   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:02.884237   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:04.164410   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:06.724605   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:06.807834   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:35:08.543499   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:35:11.845058   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:22.086842   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-337000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (1m3.925255118s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.93s)

TestStartStop/group/no-preload/serial/DeployApp (9.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1d2a09a1-97d9-4185-979c-13162f0943dc] Pending
helpers_test.go:344: "busybox" [1d2a09a1-97d9-4185-979c-13162f0943dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0128 11:35:32.964549   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:32.969874   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:32.980097   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:33.000207   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:33.042348   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:33.122582   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:33.282740   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:33.603191   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1d2a09a1-97d9-4185-979c-13162f0943dc] Running
E0128 11:35:34.243544   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:34.493642   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:35:35.525376   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:35:38.086264   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.015287487s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-337000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.27s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-337000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-337000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/no-preload/serial/Stop (10.86s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-337000 --alsologtostderr -v=3
E0128 11:35:42.567733   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:35:43.206438   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-337000 --alsologtostderr -v=3: (10.859323304s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.86s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-337000 -n no-preload-337000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-337000 -n no-preload-337000: exit status 7 (114.663825ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-337000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/SecondStart (303.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-337000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0128 11:35:53.447452   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:36:02.347214   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:36:13.927678   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:36:23.528219   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:36:30.463901   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:36:39.811076   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:39.817524   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:39.828073   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:39.848873   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:39.889665   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:39.970780   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:40.130891   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:40.451652   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:41.091937   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:42.372629   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:42.510157   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:36:44.933247   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:46.602923   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:36:50.053655   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:36:54.888203   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-337000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: (5m3.053726469s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-337000 -n no-preload-337000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.63s)

TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-182000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-182000 --alsologtostderr -v=3: (1.582916985s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-182000 -n old-k8s-version-182000: exit status 7 (114.73324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-182000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
E0128 11:38:31.202147   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9df5f" [dfbaaa01-aacf-4dc1-8aa9-6afe536e5736] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0128 11:41:00.649395   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9df5f" [dfbaaa01-aacf-4dc1-8aa9-6afe536e5736] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 10.019487288s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (10.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9df5f" [dfbaaa01-aacf-4dc1-8aa9-6afe536e5736] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006856874s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-337000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-337000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/no-preload/serial/Pause (3.3s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-337000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-337000 -n no-preload-337000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-337000 -n no-preload-337000: exit status 2 (444.483983ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-337000 -n no-preload-337000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-337000 -n no-preload-337000: exit status 2 (420.816149ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-337000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-337000 -n no-preload-337000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-337000 -n no-preload-337000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.30s)

TestStartStop/group/embed-certs/serial/FirstStart (52.99s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0128 11:41:21.702507   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:41:39.811311   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:41:42.511328   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory
E0128 11:41:50.378035   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:42:03.471695   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/addons-582000/client.crt: no such file or directory
E0128 11:42:07.314171   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/functional-251000/client.crt: no such file or directory
E0128 11:42:07.500674   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:42:09.277279   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (52.986993793s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.99s)

TestStartStop/group/embed-certs/serial/DeployApp (13.28s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-384000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3cbf07d1-c061-4f12-a5c5-efdfb636dc52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3cbf07d1-c061-4f12-a5c5-efdfb636dc52] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 13.013866076s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-384000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (13.28s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-384000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-384000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/embed-certs/serial/Stop (10.96s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-384000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-384000 --alsologtostderr -v=3: (10.956972121s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.96s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000: exit status 7 (114.934342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-384000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/embed-certs/serial/SecondStart (305.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0128 11:42:36.963819   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kubenet-732000/client.crt: no such file or directory
E0128 11:43:18.501747   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory
E0128 11:43:33.538669   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/skaffold-054000/client.crt: no such file or directory
E0128 11:43:37.851441   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:43:46.620218   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/kindnet-732000/client.crt: no such file or directory
E0128 11:44:05.543832   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:45:01.605947   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory
E0128 11:45:06.809390   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:45:31.131658   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.137978   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.148163   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.168963   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.209308   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.289432   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.449819   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:31.770112   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:32.411582   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:32.968122   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/enable-default-cni-732000/client.crt: no such file or directory
E0128 11:45:33.691890   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:36.253414   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:41.375868   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:45:51.618391   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:46:12.098896   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/no-preload-337000/client.crt: no such file or directory
E0128 11:46:29.858984   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/auto-732000/client.crt: no such file or directory
E0128 11:46:39.814670   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/bridge-732000/client.crt: no such file or directory
E0128 11:46:42.514813   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/custom-flannel-732000/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-384000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: (5m4.986399916s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-384000 -n embed-certs-384000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (305.42s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fb4s7" [f6c54f8c-182a-41a0-877f-91285970d7e7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fb4s7" [f6c54f8c-182a-41a0-877f-91285970d7e7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.014767668s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-fb4s7" [f6c54f8c-182a-41a0-877f-91285970d7e7] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009453413s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-384000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-384000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

TestStartStop/group/embed-certs/serial/Pause (3.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-384000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-384000 -n embed-certs-384000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-384000 -n embed-certs-384000: exit status 2 (428.385111ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-384000 -n embed-certs-384000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-384000 -n embed-certs-384000: exit status 2 (434.347824ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-384000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-384000 -n embed-certs-384000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-384000 -n embed-certs-384000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-404000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
E0128 11:48:18.505546   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/false-732000/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-404000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (46.773065917s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (46.77s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-404000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [112276c7-384b-473e-83d6-08aff3713672] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [112276c7-384b-473e-83d6-08aff3713672] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.014363327s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-404000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-404000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-404000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.98s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-404000 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-404000 --alsologtostderr -v=3: (10.976222437s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.98s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000: exit status 7 (115.598003ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-404000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-404000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-404000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: (5m7.121752695s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (307.69s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tnvsl" [b718247f-89c2-40f7-8700-9f9e2274bb33] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tnvsl" [b718247f-89c2-40f7-8700-9f9e2274bb33] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.015503423s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8.02s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-tnvsl" [b718247f-89c2-40f7-8700-9f9e2274bb33] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008183635s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-404000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-404000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.44s)
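
For reference, this check lists images through crictl inside the node over SSH and scans the JSON for non-minikube entries; a sketch of the same listing, assuming the profile still exists and jq is installed on the host:

	out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-404000 \
	  "sudo crictl images -o json" | jq -r '.images[].repoTags[]'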

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000: exit status 2 (423.501615ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000: exit status 2 (419.618995ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-404000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-404000 -n default-k8s-diff-port-404000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.27s)
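
Note that the exit status 2 results above are expected, not failures: minikube status encodes cluster state in its exit code, so a paused apiserver or stopped kubelet yields a non-zero exit even when nothing is wrong (the harness marks these "may be ok"). A minimal sketch, assuming the same profile:

	out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-404000
	echo $?   # 2 while paused in this run; a fully stopped host reports 7, as seen below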

TestStartStop/group/newest-cni/serial/FirstStart (44.29s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-573000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0128 11:55:00.909183   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/calico-732000/client.crt: no such file or directory
E0128 11:55:01.609877   25982 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15565-24808/.minikube/profiles/flannel-732000/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-573000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (44.291884327s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.29s)
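
Worth noting: this start combines --network-plugin=cni with a kubeadm pass-through for the pod network CIDR, which is why later steps in this group warn that pods cannot schedule until a CNI is actually installed. The same invocation, restated as a reproduction sketch (profile name and versions are the ones from this run):

	out/minikube-darwin-amd64 start -p newest-cni-573000 --memory=2200 \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --kubernetes-version=v1.26.1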

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-573000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.91s)
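
For reference, addon images can be redirected at enable time; here the test points metrics-server at a stand-in image on an unreachable registry, presumably so the enable path is exercised without depending on the real image:

	out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-573000 \
	  --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain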

TestStartStop/group/newest-cni/serial/Stop (5.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-573000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-573000 --alsologtostderr -v=3: (5.804157698s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.80s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-573000 -n newest-cni-573000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-573000 -n newest-cni-573000: exit status 7 (119.531944ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-573000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/newest-cni/serial/SecondStart (30.88s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-573000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-573000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: (30.440630802s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-573000 -n newest-cni-573000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (30.88s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-573000 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/newest-cni/serial/Pause (3.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-573000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-573000 -n newest-cni-573000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-573000 -n newest-cni-573000: exit status 2 (420.067234ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-573000 -n newest-cni-573000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-573000 -n newest-cni-573000: exit status 2 (422.89834ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-573000 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-573000 -n newest-cni-573000

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-573000 -n newest-cni-573000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.79s)

Test skip (18/306)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestAddons/parallel/Registry (14.21s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 9.617179ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:344: "registry-46hqw" [f3cdae41-9ddc-41b6-b824-1e3ad41f19be] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010618682s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4zr79" [c4802b7b-f2e1-49a0-956f-1e1b6dd321b5] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.012011713s
addons_test.go:305: (dbg) Run:  kubectl --context addons-582000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-582000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:310: (dbg) Done: kubectl --context addons-582000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.102808133s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (14.21s)

TestAddons/parallel/Ingress (12.2s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-582000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-582000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-582000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ba53141d-e192-43c4-bc13-ec91988b1259] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:344: "nginx" [ba53141d-e192-43c4-bc13-ec91988b1259] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.009265768s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-582000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (12.20s)
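
For reference, the probe above exercises the ingress path from inside the node: curl hits the controller on 127.0.0.1 and the Host header selects the rule created from testdata/nginx-ingress-v1.yaml. A manual equivalent, assuming the addons-582000 profile is still up:

	out/minikube-darwin-amd64 -p addons-582000 ssh \
	  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"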

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/ServiceCmdConnect (10.12s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-251000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-251000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-4jwvn" [50117b66-0836-41d7-ade3-71bdfd0c08c0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmdConnect
helpers_test.go:344: "hello-node-connect-5cf7cc858f-4jwvn" [50117b66-0836-41d7-ade3-71bdfd0c08c0] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.008751364s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (10.12s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (6.89s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-732000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-732000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-732000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/hosts:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/resolv.conf:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-732000

>>> host: crictl pods:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: crictl containers:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> k8s: describe netcat deployment:
error: context "cilium-732000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-732000" does not exist

>>> k8s: netcat logs:
error: context "cilium-732000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-732000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-732000" does not exist

>>> k8s: coredns logs:
error: context "cilium-732000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-732000" does not exist

>>> k8s: api server logs:
error: context "cilium-732000" does not exist

>>> host: /etc/cni:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: ip a s:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: ip r s:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: iptables-save:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: iptables table nat:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-732000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-732000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-732000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-732000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-732000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-732000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-732000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-732000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-732000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-732000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-732000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: kubelet daemon config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> k8s: kubelet logs:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-732000

>>> host: docker daemon status:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: docker daemon config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: docker system info:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: cri-docker daemon status:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: cri-docker daemon config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: cri-dockerd version:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: containerd daemon status:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: containerd daemon config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: containerd config dump:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: crio daemon status:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: crio daemon config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: /etc/crio:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

>>> host: crio config:
* Profile "cilium-732000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732000"

----------------------- debugLogs end: cilium-732000 [took: 6.374063504s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-732000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-732000
--- SKIP: TestNetworkPlugins/group/cilium (6.89s)
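
Every probe in the debugLogs dump above fails the same way because the cilium-732000 profile was skipped before any cluster was created; the context-not-found errors are expected rather than a collection failure. A sketch of confirming this from the host, using the same build:

	out/minikube-darwin-amd64 profile list
	kubectl config get-contexts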

TestStartStop/group/disable-driver-mounts (0.42s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-244000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.42s)
