Test Report: Docker_macOS 16899

f8194aff3a7b98ea29a2e4b2da65132feb1e4119:2023-07-17:30190

Test failures (17/317)

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
functional_test.go:2284: (dbg) Non-zero exit: out/minikube-darwin-amd64 license: exit status 40 (355.01983ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2285: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.36s)

TestIngressAddonLegacy/StartLegacyK8sCluster (266.96s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-200000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0717 15:18:17.760846   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:20:33.904210   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:20:45.785277   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:45.790392   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:45.800547   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:45.821337   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:45.862094   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:45.944349   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:46.106555   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:46.428704   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:47.071044   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:48.353368   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:50.913987   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:20:56.036383   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:21:01.604996   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:21:06.278861   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:21:26.760175   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:22:07.723464   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-200000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m26.913854291s)

-- stdout --
	* [ingress-addon-legacy-200000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-200000 in cluster ingress-addon-legacy-200000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0717 15:18:10.263371   79953 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:18:10.263672   79953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:18:10.263678   79953 out.go:309] Setting ErrFile to fd 2...
	I0717 15:18:10.263682   79953 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:18:10.263881   79953 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:18:10.265636   79953 out.go:303] Setting JSON to false
	I0717 15:18:10.285237   79953 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22658,"bootTime":1689609632,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:18:10.285335   79953 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:18:10.306939   79953 out.go:177] * [ingress-addon-legacy-200000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:18:10.349093   79953 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 15:18:10.349138   79953 notify.go:220] Checking for updates...
	I0717 15:18:10.391586   79953 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:18:10.412877   79953 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:18:10.433773   79953 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:18:10.454695   79953 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 15:18:10.475991   79953 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 15:18:10.497909   79953 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:18:10.552787   79953 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:18:10.552916   79953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:18:10.653804   79953 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:18:10.642119725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:18:10.695870   79953 out.go:177] * Using the docker driver based on user configuration
	I0717 15:18:10.717751   79953 start.go:298] selected driver: docker
	I0717 15:18:10.717777   79953 start.go:880] validating driver "docker" against <nil>
	I0717 15:18:10.717792   79953 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 15:18:10.721838   79953 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:18:10.822730   79953 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:18:10.809680804 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:18:10.822928   79953 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 15:18:10.823114   79953 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 15:18:10.844682   79953 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 15:18:10.866603   79953 cni.go:84] Creating CNI manager for ""
	I0717 15:18:10.866640   79953 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 15:18:10.866659   79953 start_flags.go:319] config:
	{Name:ingress-addon-legacy-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:18:10.909409   79953 out.go:177] * Starting control plane node ingress-addon-legacy-200000 in cluster ingress-addon-legacy-200000
	I0717 15:18:10.930573   79953 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 15:18:10.951674   79953 out.go:177] * Pulling base image ...
	I0717 15:18:10.993767   79953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 15:18:10.993778   79953 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 15:18:11.045119   79953 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 15:18:11.045142   79953 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 15:18:11.139404   79953 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0717 15:18:11.139440   79953 cache.go:57] Caching tarball of preloaded images
	I0717 15:18:11.139788   79953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 15:18:11.161715   79953 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0717 15:18:11.183297   79953 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:18:11.413347   79953 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0717 15:18:19.678451   79953 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:18:19.678627   79953 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:18:20.300017   79953 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0717 15:18:20.300314   79953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/config.json ...
	I0717 15:18:20.300340   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/config.json: {Name:mka151c15376a763c9a879e3fa767844b18f5db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:20.300657   79953 cache.go:195] Successfully downloaded all kic artifacts
	I0717 15:18:20.300688   79953 start.go:365] acquiring machines lock for ingress-addon-legacy-200000: {Name:mk859fabd70064b24323214255bac4d8f408dd81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 15:18:20.300833   79953 start.go:369] acquired machines lock for "ingress-addon-legacy-200000" in 137.675µs
	I0717 15:18:20.300854   79953 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 15:18:20.300940   79953 start.go:125] createHost starting for "" (driver="docker")
	I0717 15:18:20.322253   79953 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0717 15:18:20.322616   79953 start.go:159] libmachine.API.Create for "ingress-addon-legacy-200000" (driver="docker")
	I0717 15:18:20.322665   79953 client.go:168] LocalClient.Create starting
	I0717 15:18:20.322855   79953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem
	I0717 15:18:20.322924   79953 main.go:141] libmachine: Decoding PEM data...
	I0717 15:18:20.322958   79953 main.go:141] libmachine: Parsing certificate...
	I0717 15:18:20.323086   79953 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem
	I0717 15:18:20.323140   79953 main.go:141] libmachine: Decoding PEM data...
	I0717 15:18:20.323157   79953 main.go:141] libmachine: Parsing certificate...
	I0717 15:18:20.345616   79953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-200000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 15:18:20.397485   79953 cli_runner.go:211] docker network inspect ingress-addon-legacy-200000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 15:18:20.397613   79953 network_create.go:281] running [docker network inspect ingress-addon-legacy-200000] to gather additional debugging logs...
	I0717 15:18:20.397633   79953 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-200000
	W0717 15:18:20.446600   79953 cli_runner.go:211] docker network inspect ingress-addon-legacy-200000 returned with exit code 1
	I0717 15:18:20.446638   79953 network_create.go:284] error running [docker network inspect ingress-addon-legacy-200000]: docker network inspect ingress-addon-legacy-200000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-200000 not found
	I0717 15:18:20.446655   79953 network_create.go:286] output of [docker network inspect ingress-addon-legacy-200000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-200000 not found
	
	** /stderr **
	I0717 15:18:20.446766   79953 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 15:18:20.497463   79953 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fb1360}
	I0717 15:18:20.497506   79953 network_create.go:123] attempt to create docker network ingress-addon-legacy-200000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
	I0717 15:18:20.497576   79953 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-200000 ingress-addon-legacy-200000
	I0717 15:18:20.579443   79953 network_create.go:107] docker network ingress-addon-legacy-200000 192.168.49.0/24 created
	I0717 15:18:20.579483   79953 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-200000" container
	I0717 15:18:20.579625   79953 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 15:18:20.630052   79953 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-200000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200000 --label created_by.minikube.sigs.k8s.io=true
	I0717 15:18:20.682835   79953 oci.go:103] Successfully created a docker volume ingress-addon-legacy-200000
	I0717 15:18:20.682994   79953 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-200000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200000 --entrypoint /usr/bin/test -v ingress-addon-legacy-200000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 15:18:21.147647   79953 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-200000
	I0717 15:18:21.147693   79953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 15:18:21.147710   79953 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 15:18:21.147822   79953 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-200000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 15:18:24.085957   79953 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-200000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.937881436s)
	I0717 15:18:24.085987   79953 kic.go:199] duration metric: took 2.938206 seconds to extract preloaded images to volume
	I0717 15:18:24.086106   79953 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 15:18:24.185993   79953 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-200000 --name ingress-addon-legacy-200000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-200000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-200000 --network ingress-addon-legacy-200000 --ip 192.168.49.2 --volume ingress-addon-legacy-200000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 15:18:24.446499   79953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Running}}
	I0717 15:18:24.499805   79953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:18:24.557138   79953 cli_runner.go:164] Run: docker exec ingress-addon-legacy-200000 stat /var/lib/dpkg/alternatives/iptables
	I0717 15:18:24.689728   79953 oci.go:144] the created container "ingress-addon-legacy-200000" has a running status.
	I0717 15:18:24.689796   79953 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa...
	I0717 15:18:24.748577   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0717 15:18:24.748645   79953 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 15:18:24.815764   79953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:18:24.869232   79953 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 15:18:24.869254   79953 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-200000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 15:18:24.967524   79953 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:18:25.020932   79953 machine.go:88] provisioning docker machine ...
	I0717 15:18:25.020976   79953 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-200000"
	I0717 15:18:25.021084   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:25.075223   79953 main.go:141] libmachine: Using SSH client type: native
	I0717 15:18:25.075615   79953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 53966 <nil> <nil>}
	I0717 15:18:25.075631   79953 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-200000 && echo "ingress-addon-legacy-200000" | sudo tee /etc/hostname
	I0717 15:18:25.217498   79953 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-200000
	
	I0717 15:18:25.217595   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:25.279808   79953 main.go:141] libmachine: Using SSH client type: native
	I0717 15:18:25.280153   79953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 53966 <nil> <nil>}
	I0717 15:18:25.280169   79953 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-200000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-200000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-200000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 15:18:25.408961   79953 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 15:18:25.408987   79953 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 15:18:25.409009   79953 ubuntu.go:177] setting up certificates
	I0717 15:18:25.409018   79953 provision.go:83] configureAuth start
	I0717 15:18:25.409097   79953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200000
	I0717 15:18:25.459689   79953 provision.go:138] copyHostCerts
	I0717 15:18:25.459740   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 15:18:25.459805   79953 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 15:18:25.459812   79953 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 15:18:25.459936   79953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 15:18:25.460133   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 15:18:25.460176   79953 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 15:18:25.460183   79953 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 15:18:25.460303   79953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 15:18:25.460440   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 15:18:25.460475   79953 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 15:18:25.460480   79953 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 15:18:25.460541   79953 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 15:18:25.460669   79953 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-200000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-200000]
	I0717 15:18:25.618220   79953 provision.go:172] copyRemoteCerts
	I0717 15:18:25.618287   79953 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 15:18:25.618390   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:25.671336   79953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:18:25.765463   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0717 15:18:25.765536   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 15:18:25.787204   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0717 15:18:25.787290   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0717 15:18:25.809064   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0717 15:18:25.809206   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 15:18:25.831071   79953 provision.go:86] duration metric: configureAuth took 422.028022ms
	I0717 15:18:25.831087   79953 ubuntu.go:193] setting minikube options for container-runtime
	I0717 15:18:25.831273   79953 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:18:25.831340   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:25.883919   79953 main.go:141] libmachine: Using SSH client type: native
	I0717 15:18:25.884281   79953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 53966 <nil> <nil>}
	I0717 15:18:25.884300   79953 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 15:18:26.014036   79953 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 15:18:26.014052   79953 ubuntu.go:71] root file system type: overlay
	I0717 15:18:26.014165   79953 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 15:18:26.014273   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:26.064945   79953 main.go:141] libmachine: Using SSH client type: native
	I0717 15:18:26.065299   79953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 53966 <nil> <nil>}
	I0717 15:18:26.065359   79953 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 15:18:26.206010   79953 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 15:18:26.206100   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:26.257455   79953 main.go:141] libmachine: Using SSH client type: native
	I0717 15:18:26.257819   79953 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 53966 <nil> <nil>}
	I0717 15:18:26.257839   79953 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 15:18:26.894086   79953 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:18:26.204128467 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 15:18:26.894107   79953 machine.go:91] provisioned docker machine in 1.873109257s
	I0717 15:18:26.894131   79953 client.go:171] LocalClient.Create took 6.571307833s
	I0717 15:18:26.894169   79953 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-200000" took 6.571404312s
	I0717 15:18:26.894179   79953 start.go:300] post-start starting for "ingress-addon-legacy-200000" (driver="docker")
	I0717 15:18:26.894205   79953 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 15:18:26.894311   79953 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 15:18:26.894385   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:26.946753   79953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:18:27.041329   79953 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 15:18:27.045300   79953 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 15:18:27.045325   79953 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 15:18:27.045336   79953 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 15:18:27.045341   79953 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 15:18:27.045350   79953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 15:18:27.045431   79953 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 15:18:27.045596   79953 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 15:18:27.045603   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> /etc/ssl/certs/773242.pem
	I0717 15:18:27.045797   79953 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 15:18:27.054503   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:18:27.075514   79953 start.go:303] post-start completed in 181.321904ms
	I0717 15:18:27.076131   79953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200000
	I0717 15:18:27.128197   79953 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/config.json ...
	I0717 15:18:27.128649   79953 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 15:18:27.128708   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:27.181202   79953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:18:27.271796   79953 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 15:18:27.277042   79953 start.go:128] duration metric: createHost completed in 6.975933082s
	I0717 15:18:27.277060   79953 start.go:83] releasing machines lock for "ingress-addon-legacy-200000", held for 6.976056959s
	I0717 15:18:27.277138   79953 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-200000
	I0717 15:18:27.327628   79953 ssh_runner.go:195] Run: cat /version.json
	I0717 15:18:27.327664   79953 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 15:18:27.327710   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:27.327742   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:27.383568   79953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:18:27.383569   79953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:18:27.578790   79953 ssh_runner.go:195] Run: systemctl --version
	I0717 15:18:27.583914   79953 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 15:18:27.589352   79953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 15:18:27.612681   79953 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 15:18:27.612804   79953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 15:18:27.630713   79953 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 15:18:27.648007   79953 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 15:18:27.648022   79953 start.go:466] detecting cgroup driver to use...
	I0717 15:18:27.648036   79953 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:18:27.648157   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:18:27.663686   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0717 15:18:27.673507   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 15:18:27.683635   79953 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 15:18:27.683701   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 15:18:27.693759   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:18:27.703692   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 15:18:27.713603   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:18:27.723643   79953 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 15:18:27.733037   79953 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 15:18:27.742689   79953 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 15:18:27.751709   79953 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 15:18:27.760269   79953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:18:27.832016   79953 ssh_runner.go:195] Run: sudo systemctl restart containerd
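
The sequence above amounts to: point crictl at containerd, normalize the runtime to runc v2, force the cgroupfs driver to match the detected host driver, and restart the daemon. Condensed, the driver change alone looks like this (assuming a stock /etc/containerd/config.toml, as in this run):

    # cgroupfs was detected on the host, so keep containerd off the systemd driver:
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # Pick up the edited config and restart the runtime:
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
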
	I0717 15:18:27.911688   79953 start.go:466] detecting cgroup driver to use...
	I0717 15:18:27.911708   79953 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:18:27.911831   79953 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 15:18:27.924524   79953 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 15:18:27.924601   79953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 15:18:27.938520   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:18:27.973066   79953 ssh_runner.go:195] Run: which cri-dockerd
	I0717 15:18:27.978968   79953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 15:18:27.988578   79953 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 15:18:28.006115   79953 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 15:18:28.114106   79953 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 15:18:28.207483   79953 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 15:18:28.207500   79953 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 15:18:28.224619   79953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:18:28.317606   79953 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 15:18:28.558241   79953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 15:18:28.583975   79953 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 15:18:28.655453   79953 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0717 15:18:28.655644   79953 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-200000 dig +short host.docker.internal
	I0717 15:18:28.770759   79953 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 15:18:28.770894   79953 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 15:18:28.775973   79953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
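
That one-liner is an idempotent hosts-file update: drop any stale host.minikube.internal entry, append the mapping just discovered via dig, and copy the result back in a single cp. Spelled out (same commands as the log line, reformatted for readability):

    # Rebuild /etc/hosts without the old entry, then append the fresh mapping:
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo "192.168.65.254	host.minikube.internal"
    } > /tmp/h.$$
    # Install the new file in one cp so /etc/hosts is never left half-written:
    sudo cp /tmp/h.$$ /etc/hosts
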
	I0717 15:18:28.787672   79953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:18:28.842051   79953 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0717 15:18:28.842134   79953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:18:28.862208   79953 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 15:18:28.862222   79953 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 15:18:28.862291   79953 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 15:18:28.871678   79953 ssh_runner.go:195] Run: which lz4
	I0717 15:18:28.876172   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0717 15:18:28.876354   79953 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 15:18:28.880601   79953 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 15:18:28.880623   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (424164442 bytes)
	I0717 15:18:34.534068   79953 docker.go:600] Took 5.657655 seconds to copy over tarball
	I0717 15:18:34.534184   79953 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 15:18:36.638788   79953 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.104489096s)
	I0717 15:18:36.638820   79953 ssh_runner.go:146] rm: /preloaded.tar.lz4
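
The preload path is: stat for an existing tarball, scp the ~400 MB archive over SSH, unpack it directly into /var (where Docker's image store lives), and delete the archive. The unpack step on its own:

    # lz4-decompress and extract the preloaded image layers into /var/lib/docker:
    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
    # Reclaim the space once the layers are in place:
    sudo rm /preloaded.tar.lz4
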
	I0717 15:18:36.700699   79953 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 15:18:36.711147   79953 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0717 15:18:36.727419   79953 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:18:36.797348   79953 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 15:18:37.769380   79953 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:18:37.790124   79953 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0717 15:18:37.790143   79953 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0717 15:18:37.790151   79953 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 15:18:37.796183   79953 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 15:18:37.796199   79953 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 15:18:37.796233   79953 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:18:37.796240   79953 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 15:18:37.796174   79953 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 15:18:37.796290   79953 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0717 15:18:37.796387   79953 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0717 15:18:37.796423   79953 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0717 15:18:37.801836   79953 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 15:18:37.802109   79953 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 15:18:37.802128   79953 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0717 15:18:37.802177   79953 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 15:18:37.804591   79953 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0717 15:18:37.804653   79953 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:18:37.804713   79953 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 15:18:37.804883   79953 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0717 15:18:38.949446   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0717 15:18:38.970794   79953 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0717 15:18:38.970840   79953 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0717 15:18:38.970889   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0717 15:18:38.991711   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0717 15:18:39.116992   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0717 15:18:39.138604   79953 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0717 15:18:39.138631   79953 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0717 15:18:39.138689   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0717 15:18:39.160647   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0717 15:18:39.331678   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0717 15:18:39.353615   79953 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0717 15:18:39.353639   79953 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0717 15:18:39.353694   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0717 15:18:39.374330   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0717 15:18:39.402883   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0717 15:18:39.424395   79953 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0717 15:18:39.424425   79953 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0717 15:18:39.424478   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0717 15:18:39.445371   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0717 15:18:39.635644   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0717 15:18:39.656793   79953 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0717 15:18:39.656822   79953 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0717 15:18:39.656884   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0717 15:18:39.676102   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0717 15:18:40.487868   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:18:40.669040   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 15:18:40.689208   79953 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0717 15:18:40.689233   79953 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 15:18:40.689305   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0717 15:18:40.708575   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0717 15:18:40.778721   79953 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0717 15:18:40.799678   79953 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0717 15:18:40.799720   79953 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0717 15:18:40.799854   79953 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0717 15:18:40.818415   79953 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0717 15:18:40.818463   79953 cache_images.go:92] LoadImages completed in 3.028233451s
	W0717 15:18:40.818522   79953 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I0717 15:18:40.818606   79953 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 15:18:40.870756   79953 cni.go:84] Creating CNI manager for ""
	I0717 15:18:40.870773   79953 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 15:18:40.870790   79953 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 15:18:40.870807   79953 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-200000 NodeName:ingress-addon-legacy-200000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 15:18:40.870921   79953 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-200000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 15:18:40.870999   79953 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-200000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 15:18:40.871066   79953 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0717 15:18:40.880203   79953 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 15:18:40.880260   79953 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 15:18:40.889435   79953 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0717 15:18:40.906324   79953 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0717 15:18:40.923266   79953 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
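
With the drop-in and unit file in place, the effective kubelet service can be sanity-checked before the init run (a verification sketch, not part of this log):

    # kubelet.service plus the 10-kubeadm.conf drop-in written above:
    systemctl cat kubelet
    # The kubeadm config staged for the upcoming init:
    cat /var/tmp/minikube/kubeadm.yaml.new
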
	I0717 15:18:40.940348   79953 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0717 15:18:40.945140   79953 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 15:18:40.956403   79953 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000 for IP: 192.168.49.2
	I0717 15:18:40.956422   79953 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:40.956608   79953 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 15:18:40.956676   79953 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 15:18:40.956719   79953 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.key
	I0717 15:18:40.956731   79953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.crt with IP's: []
	I0717 15:18:41.282886   79953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.crt ...
	I0717 15:18:41.282899   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.crt: {Name:mk8583af8f7a4a07c284bd9e0d44552dc9e77e59 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.283214   79953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.key ...
	I0717 15:18:41.283228   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/client.key: {Name:mkff9c9f5c25ae07aa62a640cb4db82b50630d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.283447   79953 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key.dd3b5fb2
	I0717 15:18:41.283464   79953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 15:18:41.596842   79953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt.dd3b5fb2 ...
	I0717 15:18:41.596854   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt.dd3b5fb2: {Name:mkd9346b9402b674c383ecbd5793c0294a2b1d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.597136   79953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key.dd3b5fb2 ...
	I0717 15:18:41.597144   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key.dd3b5fb2: {Name:mk3cebce840e21d3db1bf10a58cbf493a5a69343 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.597369   79953 certs.go:337] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt
	I0717 15:18:41.597535   79953 certs.go:341] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key
	I0717 15:18:41.597695   79953 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.key
	I0717 15:18:41.597714   79953 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.crt with IP's: []
	I0717 15:18:41.690245   79953 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.crt ...
	I0717 15:18:41.690254   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.crt: {Name:mkbc1a9f2e6f1a1758e2d2825f8995f1c412d7b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.690485   79953 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.key ...
	I0717 15:18:41.690493   79953 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.key: {Name:mk37851c02473bf2d2f582d2dc3001df1e26346e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:18:41.690680   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0717 15:18:41.690710   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0717 15:18:41.690731   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0717 15:18:41.690751   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0717 15:18:41.690775   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0717 15:18:41.690794   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0717 15:18:41.690812   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0717 15:18:41.690830   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0717 15:18:41.690919   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 15:18:41.690971   79953 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 15:18:41.690983   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 15:18:41.691015   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 15:18:41.691048   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 15:18:41.691079   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 15:18:41.691151   79953 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:18:41.691187   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> /usr/share/ca-certificates/773242.pem
	I0717 15:18:41.691208   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:18:41.691229   79953 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem -> /usr/share/ca-certificates/77324.pem
	I0717 15:18:41.691725   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 15:18:41.715262   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 15:18:41.738244   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 15:18:41.761128   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/ingress-addon-legacy-200000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 15:18:41.782903   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 15:18:41.805171   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 15:18:41.827368   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 15:18:41.849377   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 15:18:41.871681   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 15:18:41.893274   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 15:18:41.916273   79953 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 15:18:41.938268   79953 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 15:18:41.955729   79953 ssh_runner.go:195] Run: openssl version
	I0717 15:18:41.961777   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 15:18:41.971253   79953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 15:18:41.976249   79953 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 15:18:41.976307   79953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 15:18:41.983303   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 15:18:41.992775   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 15:18:42.003082   79953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 15:18:42.007503   79953 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 15:18:42.007570   79953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 15:18:42.014346   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 15:18:42.023715   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 15:18:42.033545   79953 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:18:42.038452   79953 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:18:42.038506   79953 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:18:42.045428   79953 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
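
Each CA above is installed twice: the PEM is copied into /usr/share/ca-certificates, then symlinked under its OpenSSL subject hash in /etc/ssl/certs, which is the c_rehash layout OpenSSL searches at verify time. For one certificate the pattern is:

    # Subject hash OpenSSL uses to locate the CA (b5213941 for minikubeCA in this run):
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # Link the cert under <hash>.0 in the trust directory:
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$HASH".0
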
	I0717 15:18:42.055157   79953 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 15:18:42.059653   79953 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 15:18:42.059700   79953 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-200000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:18:42.059808   79953 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:18:42.078395   79953 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 15:18:42.087804   79953 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 15:18:42.096644   79953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 15:18:42.096708   79953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 15:18:42.106068   79953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 15:18:42.106130   79953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 15:18:42.158893   79953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 15:18:42.158945   79953 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 15:18:42.409327   79953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 15:18:42.409434   79953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 15:18:42.409571   79953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 15:18:42.589272   79953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 15:18:42.589776   79953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 15:18:42.589809   79953 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 15:18:42.664457   79953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 15:18:42.687207   79953 out.go:204]   - Generating certificates and keys ...
	I0717 15:18:42.687318   79953 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 15:18:42.687423   79953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 15:18:42.952112   79953 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 15:18:43.171211   79953 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 15:18:43.299864   79953 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 15:18:43.424533   79953 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 15:18:43.639930   79953 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 15:18:43.640050   79953 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-200000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 15:18:43.812920   79953 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 15:18:43.813079   79953 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-200000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0717 15:18:43.962040   79953 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 15:18:44.066575   79953 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 15:18:44.254387   79953 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 15:18:44.254444   79953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 15:18:44.356460   79953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 15:18:44.610415   79953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 15:18:44.693561   79953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 15:18:44.801745   79953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 15:18:44.802389   79953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 15:18:44.824234   79953 out.go:204]   - Booting up control plane ...
	I0717 15:18:44.824452   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 15:18:44.824606   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 15:18:44.824725   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 15:18:44.824875   79953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 15:18:44.825129   79953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 15:19:24.814235   79953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 15:19:24.815235   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:19:24.815472   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:19:29.816343   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:19:29.816551   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:19:39.818414   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:19:39.818668   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:19:59.821119   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:19:59.821335   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:20:39.824122   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:20:39.824418   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:20:39.824436   79953 kubeadm.go:322] 
	I0717 15:20:39.824509   79953 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0717 15:20:39.824565   79953 kubeadm.go:322] 		timed out waiting for the condition
	I0717 15:20:39.824571   79953 kubeadm.go:322] 
	I0717 15:20:39.824603   79953 kubeadm.go:322] 	This error is likely caused by:
	I0717 15:20:39.824641   79953 kubeadm.go:322] 		- The kubelet is not running
	I0717 15:20:39.824763   79953 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 15:20:39.824778   79953 kubeadm.go:322] 
	I0717 15:20:39.824898   79953 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 15:20:39.824942   79953 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0717 15:20:39.824979   79953 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0717 15:20:39.824985   79953 kubeadm.go:322] 
	I0717 15:20:39.825124   79953 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 15:20:39.825228   79953 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 15:20:39.825243   79953 kubeadm.go:322] 
	I0717 15:20:39.825350   79953 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0717 15:20:39.825426   79953 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0717 15:20:39.825530   79953 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0717 15:20:39.825568   79953 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0717 15:20:39.825578   79953 kubeadm.go:322] 
	I0717 15:20:39.827735   79953 kubeadm.go:322] W0717 22:18:42.157619    1670 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 15:20:39.827890   79953 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 15:20:39.827952   79953 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 15:20:39.828057   79953 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0717 15:20:39.828135   79953 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 15:20:39.828237   79953 kubeadm.go:322] W0717 22:18:44.807581    1670 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 15:20:39.828348   79953 kubeadm.go:322] W0717 22:18:44.808502    1670 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 15:20:39.828418   79953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 15:20:39.828489   79953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
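
Everything after [wait-control-plane] is kubeadm polling the kubelet's healthz endpoint until it times out. Re-running that probe and the suggested diagnostics by hand inside the node is the quickest way to narrow this down, using only the commands kubeadm itself recommends above:

    # The probe kubeadm repeats; "connection refused" means the kubelet never bound :10248:
    curl -sSL http://localhost:10248/healthz
    # Service state and the most recent kubelet log lines:
    systemctl status kubelet
    journalctl -xeu kubelet --no-pager | tail -n 50
    # Control-plane containers that crashed on start, if any:
    docker ps -a | grep kube | grep -v pause
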
	W0717 15:20:39.828574   79953 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-200000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-200000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 22:18:42.157619    1670 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 22:18:44.807581    1670 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 22:18:44.808502    1670 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
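	[Note: the checks above can be reproduced by hand before minikube's automatic retry (visible below) kicks in. A minimal troubleshooting sketch, assuming the profile name from this run; each command mirrors one that kubeadm or minikube itself suggests in this log:
	
		# Open a shell inside the minikube node container
		minikube -p ingress-addon-legacy-200000 ssh
		# Is the kubelet service running, and what does its journal say?
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet --no-pager | tail -n 100
		# Probe the same healthz endpoint the kubelet-check polls
		curl -sSL http://localhost:10248/healthz
		# List any control-plane containers the runtime managed to start
		docker ps -a | grep kube | grep -v pause
	]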
	
	I0717 15:20:39.828607   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 15:20:40.244963   79953 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:20:40.255986   79953 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 15:20:40.256043   79953 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 15:20:40.264656   79953 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 15:20:40.264690   79953 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 15:20:40.314396   79953 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0717 15:20:40.314451   79953 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 15:20:40.557136   79953 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 15:20:40.557240   79953 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 15:20:40.557323   79953 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 15:20:40.733454   79953 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 15:20:40.733970   79953 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 15:20:40.734023   79953 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 15:20:40.811737   79953 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 15:20:40.833348   79953 out.go:204]   - Generating certificates and keys ...
	I0717 15:20:40.833451   79953 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 15:20:40.833547   79953 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 15:20:40.833631   79953 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 15:20:40.833709   79953 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 15:20:40.833822   79953 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 15:20:40.833880   79953 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 15:20:40.833959   79953 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 15:20:40.834070   79953 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 15:20:40.834167   79953 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 15:20:40.834233   79953 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 15:20:40.834267   79953 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 15:20:40.834328   79953 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 15:20:40.941760   79953 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 15:20:41.046494   79953 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 15:20:41.421199   79953 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 15:20:41.560824   79953 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 15:20:41.561275   79953 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 15:20:41.583040   79953 out.go:204]   - Booting up control plane ...
	I0717 15:20:41.583197   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 15:20:41.583374   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 15:20:41.583491   79953 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 15:20:41.583577   79953 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 15:20:41.583771   79953 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 15:21:21.572492   79953 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 15:21:21.573879   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:21:21.574091   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:21:26.576684   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:21:26.576921   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:21:36.578505   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:21:36.578745   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:21:56.580505   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:21:56.580812   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:22:36.582902   79953 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:22:36.583148   79953 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:22:36.583160   79953 kubeadm.go:322] 
	I0717 15:22:36.583208   79953 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0717 15:22:36.583253   79953 kubeadm.go:322] 		timed out waiting for the condition
	I0717 15:22:36.583259   79953 kubeadm.go:322] 
	I0717 15:22:36.583291   79953 kubeadm.go:322] 	This error is likely caused by:
	I0717 15:22:36.583340   79953 kubeadm.go:322] 		- The kubelet is not running
	I0717 15:22:36.583480   79953 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 15:22:36.583493   79953 kubeadm.go:322] 
	I0717 15:22:36.583607   79953 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 15:22:36.583644   79953 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0717 15:22:36.583678   79953 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0717 15:22:36.583689   79953 kubeadm.go:322] 
	I0717 15:22:36.583818   79953 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 15:22:36.583935   79953 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0717 15:22:36.583955   79953 kubeadm.go:322] 
	I0717 15:22:36.584080   79953 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0717 15:22:36.584141   79953 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0717 15:22:36.584237   79953 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0717 15:22:36.584282   79953 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0717 15:22:36.584297   79953 kubeadm.go:322] 
	I0717 15:22:36.586886   79953 kubeadm.go:322] W0717 22:20:40.313020    4137 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0717 15:22:36.587077   79953 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 15:22:36.587140   79953 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 15:22:36.587257   79953 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0717 15:22:36.587346   79953 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 15:22:36.587451   79953 kubeadm.go:322] W0717 22:20:41.565756    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 15:22:36.587557   79953 kubeadm.go:322] W0717 22:20:41.566529    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0717 15:22:36.587624   79953 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 15:22:36.587685   79953 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 15:22:36.587710   79953 kubeadm.go:406] StartCluster complete in 3m54.522552662s
	I0717 15:22:36.587817   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 15:22:36.606806   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.606818   79953 logs.go:286] No container was found matching "kube-apiserver"
	I0717 15:22:36.606888   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 15:22:36.626587   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.626600   79953 logs.go:286] No container was found matching "etcd"
	I0717 15:22:36.626684   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 15:22:36.645961   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.645979   79953 logs.go:286] No container was found matching "coredns"
	I0717 15:22:36.646054   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 15:22:36.666019   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.666037   79953 logs.go:286] No container was found matching "kube-scheduler"
	I0717 15:22:36.666104   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 15:22:36.685983   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.686012   79953 logs.go:286] No container was found matching "kube-proxy"
	I0717 15:22:36.686081   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 15:22:36.704795   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.704811   79953 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 15:22:36.704883   79953 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 15:22:36.725560   79953 logs.go:284] 0 containers: []
	W0717 15:22:36.725573   79953 logs.go:286] No container was found matching "kindnet"
	I0717 15:22:36.725584   79953 logs.go:123] Gathering logs for kubelet ...
	I0717 15:22:36.725592   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 15:22:36.765962   79953 logs.go:123] Gathering logs for dmesg ...
	I0717 15:22:36.765984   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 15:22:36.781839   79953 logs.go:123] Gathering logs for describe nodes ...
	I0717 15:22:36.781852   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 15:22:36.840732   79953 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 15:22:36.840746   79953 logs.go:123] Gathering logs for Docker ...
	I0717 15:22:36.840756   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 15:22:36.857136   79953 logs.go:123] Gathering logs for container status ...
	I0717 15:22:36.857149   79953 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
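	[Note: at this point minikube has gathered kubelet, dmesg, describe-nodes, Docker, and container-status logs on its own. The same diagnostics can be captured for a bug report, as the error box further below suggests; a sketch, assuming the same profile:
	
		# Write the full diagnostic bundle referenced in the error box
		out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 logs --file=logs.txt
		# Or pull individual journals straight from the node
		minikube -p ingress-addon-legacy-200000 ssh -- sudo journalctl -u kubelet -n 400 --no-pager
		minikube -p ingress-addon-legacy-200000 ssh -- sudo journalctl -u docker -u cri-docker -n 400 --no-pager
	]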
	W0717 15:22:36.907535   79953 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 22:20:40.313020    4137 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 22:20:41.565756    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 22:20:41.566529    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 15:22:36.907557   79953 out.go:239] * 
	W0717 15:22:36.907598   79953 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 22:20:40.313020    4137 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 22:20:41.565756    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 22:20:41.566529    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 15:22:36.907626   79953 out.go:239] * 
	W0717 15:22:36.908304   79953 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 15:22:36.972219   79953 out.go:177] 
	W0717 15:22:37.035192   79953 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0717 22:20:40.313020    4137 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0717 22:20:41.565756    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0717 22:20:41.566529    4137 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 15:22:37.035287   79953 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 15:22:37.035315   79953 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 15:22:37.077952   79953 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-200000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (266.96s)
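
[Note: both kubeadm init attempts fail identically: the kubelet never answers on port 10248, the control plane times out after 4m0s, and the run exits with K8S_KUBELET_NOT_RUNNING (exit status 109). The preflight warnings are the strongest lead: Docker 24.0.4 is far past the last version validated for Kubernetes v1.18.20 (19.03), and kubeadm detects the "cgroupfs" Docker cgroup driver. A hedged retry following the suggestion minikube prints above, untested here, reusing the failing invocation's arguments:

	out/minikube-darwin-amd64 start -p ingress-addon-legacy-200000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	  --alsologtostderr -v=5 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

Whether that flag helps depends on the actual cgroup driver mismatch; the related issue linked above (kubernetes/minikube#4172) tracks the same symptom.]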

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (81.95s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 addons enable ingress --alsologtostderr -v=5
E0717 15:23:29.645744   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m21.54283165s)

-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

-- /stdout --
** stderr ** 
	I0717 15:22:37.227035   80185 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:22:37.227480   80185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:22:37.227486   80185 out.go:309] Setting ErrFile to fd 2...
	I0717 15:22:37.227491   80185 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:22:37.227686   80185 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:22:37.228328   80185 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:22:37.228346   80185 addons.go:594] checking whether the cluster is paused
	I0717 15:22:37.228424   80185 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:22:37.228444   80185 host.go:66] Checking if "ingress-addon-legacy-200000" exists ...
	I0717 15:22:37.228829   80185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:22:37.278188   80185 ssh_runner.go:195] Run: systemctl --version
	I0717 15:22:37.278285   80185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:22:37.328635   80185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:22:37.418755   80185 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:22:37.459722   80185 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0717 15:22:37.480729   80185 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:22:37.480755   80185 addons.go:69] Setting ingress=true in profile "ingress-addon-legacy-200000"
	I0717 15:22:37.480798   80185 addons.go:231] Setting addon ingress=true in "ingress-addon-legacy-200000"
	I0717 15:22:37.480862   80185 host.go:66] Checking if "ingress-addon-legacy-200000" exists ...
	I0717 15:22:37.481460   80185 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:22:37.553346   80185 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0717 15:22:37.574600   80185 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0717 15:22:37.595482   80185 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v0.49.3
	I0717 15:22:37.616280   80185 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0717 15:22:37.637884   80185 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0717 15:22:37.637909   80185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15618 bytes)
	I0717 15:22:37.638062   80185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:22:37.688361   80185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:22:37.786981   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:37.840753   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:37.840778   80185 retry.go:31] will retry after 289.927366ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:38.132919   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:38.187736   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:38.187753   80185 retry.go:31] will retry after 551.309345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:38.741384   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:38.796645   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:38.796662   80185 retry.go:31] will retry after 425.896991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:39.224569   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:39.282948   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:39.282971   80185 retry.go:31] will retry after 918.722154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:40.203966   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:40.261719   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:40.261739   80185 retry.go:31] will retry after 1.858190948s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:42.120537   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:42.175628   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:42.175646   80185 retry.go:31] will retry after 1.788516605s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:43.966501   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:44.022542   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:44.022561   80185 retry.go:31] will retry after 3.527450968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:47.552410   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:47.606696   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:47.606714   80185 retry.go:31] will retry after 3.319551032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:50.928602   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:50.985190   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:50.985207   80185 retry.go:31] will retry after 7.64074518s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:58.627752   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:22:58.684191   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:22:58.684211   80185 retry.go:31] will retry after 9.298768263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:07.983738   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:23:08.039805   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:08.039822   80185 retry.go:31] will retry after 13.321609253s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:21.364023   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:23:21.419125   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:21.419151   80185 retry.go:31] will retry after 13.547841077s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:34.969689   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:23:35.023850   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:35.023879   80185 retry.go:31] will retry after 23.531739247s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:58.556504   80185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0717 15:23:58.611543   80185 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:58.611572   80185 addons.go:467] Verifying addon ingress=true in "ingress-addon-legacy-200000"
	I0717 15:23:58.632978   80185 out.go:177] * Verifying ingress addon...
	I0717 15:23:58.656151   80185 out.go:177] 
	W0717 15:23:58.676824   80185 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-200000" does not exist: client config: context "ingress-addon-legacy-200000" does not exist]
	W0717 15:23:58.676845   80185 out.go:239] * 
	W0717 15:23:58.686024   80185 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 15:23:58.706809   80185 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
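
Every apply above dies on a refused connection to localhost:8443, i.e. the apiserver from the failed cluster start never came up; the enabler only retries with a growing backoff until its budget runs out. A hedged check of that diagnosis, assuming the node container is still up and using the 8443/tcp host port (53970) from the inspect output below:

	# did an apiserver container ever start inside the node?
	docker exec ingress-addon-legacy-200000 docker ps -a | grep kube-apiserver
	# does anything answer on the forwarded apiserver port? (-k: cert is not trusted by the host)
	curl -k https://127.0.0.1:53970/healthz
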
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-200000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-200000:

-- stdout --
	[
	    {
	        "Id": "61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1",
	        "Created": "2023-07-17T22:18:24.238767774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 977443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:18:24.438544287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hosts",
	        "LogPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1-json.log",
	        "Name": "/ingress-addon-legacy-200000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-200000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-200000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-200000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-200000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-200000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd11ceaf69bec62a63e971dca674c11ad5eba99f2b1badece27003266fbf076",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53966"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53968"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53969"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53970"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfd11ceaf69b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-200000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61ae6444c801",
	                        "ingress-addon-legacy-200000"
	                    ],
	                    "NetworkID": "eed4dc933c672536e9bda3cec09cde5e7643db3b34efe92796c3b65e162e8498",
	                    "EndpointID": "6a9631feb147b3f0915b1aa797e51573e3d5061bb90391640fde2c1b9250e53c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000: exit status 6 (357.107996ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 15:23:59.129299   80211 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-200000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-200000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (81.95s)
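
The status probe fails for a different reason than the addon: the profile never made it into the tester's kubeconfig (see the kubeconfig endpoint error above), which is exactly what the stale-context warning points at. Its own remedy can be applied verbatim:

	out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 update-context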

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (117.06s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 addons enable ingress-dns --alsologtostderr -v=5
E0717 15:25:33.913127   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:25:45.792540   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-200000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m56.597926161s)

-- stdout --
	* ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

-- /stdout --
** stderr ** 
	I0717 15:23:59.181848   80221 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:23:59.182027   80221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:23:59.182032   80221 out.go:309] Setting ErrFile to fd 2...
	I0717 15:23:59.182036   80221 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:23:59.182217   80221 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:23:59.182776   80221 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:23:59.182793   80221 addons.go:594] checking whether the cluster is paused
	I0717 15:23:59.182870   80221 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:23:59.182890   80221 host.go:66] Checking if "ingress-addon-legacy-200000" exists ...
	I0717 15:23:59.184235   80221 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:23:59.233711   80221 ssh_runner.go:195] Run: systemctl --version
	I0717 15:23:59.233826   80221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:23:59.284622   80221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:23:59.375145   80221 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:23:59.416884   80221 out.go:177] * ingress-dns is an addon maintained by minikube. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0717 15:23:59.437751   80221 config.go:182] Loaded profile config "ingress-addon-legacy-200000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0717 15:23:59.437781   80221 addons.go:69] Setting ingress-dns=true in profile "ingress-addon-legacy-200000"
	I0717 15:23:59.437793   80221 addons.go:231] Setting addon ingress-dns=true in "ingress-addon-legacy-200000"
	I0717 15:23:59.437850   80221 host.go:66] Checking if "ingress-addon-legacy-200000" exists ...
	I0717 15:23:59.438462   80221 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-200000 --format={{.State.Status}}
	I0717 15:23:59.510598   80221 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources will be available at "127.0.0.1"
	I0717 15:23:59.531905   80221 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0717 15:23:59.553900   80221 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0717 15:23:59.553932   80221 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0717 15:23:59.554069   80221 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-200000
	I0717 15:23:59.605693   80221 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53966 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/ingress-addon-legacy-200000/id_rsa Username:docker}
	I0717 15:23:59.706204   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:23:59.759206   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:23:59.759234   80221 retry.go:31] will retry after 316.059686ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:00.076314   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:00.131386   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:00.131412   80221 retry.go:31] will retry after 492.749723ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:00.626431   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:00.684203   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:00.684222   80221 retry.go:31] will retry after 373.384143ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:01.059866   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:01.115830   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:01.115852   80221 retry.go:31] will retry after 869.570871ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:01.986331   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:02.042835   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:02.042853   80221 retry.go:31] will retry after 797.046145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:02.842221   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:02.898756   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:02.898773   80221 retry.go:31] will retry after 2.617347423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:05.517922   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:05.574338   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:05.574361   80221 retry.go:31] will retry after 2.074417542s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:07.649836   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:07.704900   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:07.704919   80221 retry.go:31] will retry after 2.189281716s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:09.896393   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:09.949509   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:09.949530   80221 retry.go:31] will retry after 5.182089439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:15.133275   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:15.188353   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:15.188373   80221 retry.go:31] will retry after 9.923928834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:25.114852   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:25.171356   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:25.171372   80221 retry.go:31] will retry after 12.213437739s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:37.387391   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:24:37.445328   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:24:37.445346   80221 retry.go:31] will retry after 32.026080137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:25:09.473790   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:25:09.528682   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:25:09.528699   80221 retry.go:31] will retry after 46.060126646s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:25:55.590065   80221 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0717 15:25:55.644071   80221 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0717 15:25:55.665904   80221 out.go:177] 
	W0717 15:25:55.688020   80221 out.go:239] X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply --force -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0717 15:25:55.688055   80221 out.go:239] * 
	W0717 15:25:55.696760   80221 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 15:25:55.717777   80221 out.go:177] 

** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
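
The ingress-dns enable fails the same way, and the "will retry after ..." lines trace minikube's growing backoff (316ms up to 46s) around one kubectl apply. For reproducing the apply by hand, a rough shell equivalent of that loop; the delay seed, growth factor, and attempt cap are hypothetical, and it assumes bc is available on the node:

	delay=0.3
	for attempt in $(seq 1 15); do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.18.20/kubectl apply --force \
	    -f /etc/kubernetes/addons/ingress-dns-pod.yaml && break
	  sleep "$delay"                        # wait, then grow the delay like retry.go's logged intervals
	  delay=$(echo "$delay * 1.6" | bc)
	done
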
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-200000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-200000:

-- stdout --
	[
	    {
	        "Id": "61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1",
	        "Created": "2023-07-17T22:18:24.238767774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 977443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:18:24.438544287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hosts",
	        "LogPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1-json.log",
	        "Name": "/ingress-addon-legacy-200000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-200000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-200000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-200000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-200000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-200000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd11ceaf69bec62a63e971dca674c11ad5eba99f2b1badece27003266fbf076",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53966"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53968"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53969"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53970"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfd11ceaf69b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-200000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61ae6444c801",
	                        "ingress-addon-legacy-200000"
	                    ],
	                    "NetworkID": "eed4dc933c672536e9bda3cec09cde5e7643db3b34efe92796c3b65e162e8498",
	                    "EndpointID": "6a9631feb147b3f0915b1aa797e51573e3d5061bb90391640fde2c1b9250e53c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
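A few fields in this dump carry the whole story: the container is Running, every PortBindings entry uses HostPort "0" (ask Docker for an ephemeral host port), and the resulting assignments appear under NetworkSettings.Ports (8443/tcp is reachable at 127.0.0.1:53970). Memory 4294967296 bytes is exactly the 4096 MiB requested at start, and NanoCpus 2000000000 is 2 CPUs. When only such fields matter, `docker inspect --format` with a Go template is easier to read than the full JSON; a small sketch, assuming the container name from the log:

	// Sketch: pull just the post-mortem-relevant fields via docker's
	// Go-template formatter instead of dumping the whole inspect JSON.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		format := `{{.State.Status}} ` +
			`{{(index .NetworkSettings.Networks "ingress-addon-legacy-200000").IPAddress}} ` +
			`{{index .NetworkSettings.Ports "8443/tcp"}}`
		out, err := exec.Command("docker", "inspect", "-f", format,
			"ingress-addon-legacy-200000").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Print(string(out)) // e.g. running 192.168.49.2 [{127.0.0.1 53970}]
	}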
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000: exit status 6 (412.487115ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 15:25:56.196098   80239 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-200000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-200000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (117.06s)
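The `--format={{.Host}}` flag is a Go text/template applied to minikube's status struct, which is why stdout above is just the bare host state plus the warning lines. A minimal sketch of the mechanism, with a stand-in struct rather than minikube's actual type:

	// Sketch of template-formatted status output; Status here is a stand-in.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Misconfigured"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil { // prints: Running
			panic(err)
		}
	}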

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: failed to get Kubernetes client: <nil>
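The `<nil>` client follows from the kubeconfig problem reported below: the profile's entry is missing, so no REST config (and therefore no clientset) can be built. A sketch of the lookup-then-build sequence using client-go, under the assumption that the test needs a client for the profile named in the log:

	// Sketch: resolve a profile's cluster entry from the kubeconfig, then
	// build a clientset. With the entry absent, it fails before any API call.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/Users/jenkins/minikube-integration/16899-76867/kubeconfig"
		raw, err := clientcmd.LoadFromFile(kubeconfig)
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		cluster, ok := raw.Clusters["ingress-addon-legacy-200000"]
		if !ok {
			fmt.Println(`"ingress-addon-legacy-200000" does not appear in the kubeconfig`)
			return
		}
		cfg, err := clientcmd.BuildConfigFromFlags(cluster.Server, kubeconfig)
		if err != nil {
			fmt.Println("build config:", err)
			return
		}
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			fmt.Println("new clientset:", err)
		}
	}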
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-200000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-200000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1",
	        "Created": "2023-07-17T22:18:24.238767774Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 977443,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:18:24.438544287Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hostname",
	        "HostsPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/hosts",
	        "LogPath": "/var/lib/docker/containers/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1/61ae6444c80136335081a960867be3b8dc87321ff0ce5588f279be8a26eeb7c1-json.log",
	        "Name": "/ingress-addon-legacy-200000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-200000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-200000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/79bb2f8338f41a367723a77176bb0fe5b5b6593ebc3acac2493c68b3ea60276e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-200000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-200000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-200000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-200000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dfd11ceaf69bec62a63e971dca674c11ad5eba99f2b1badece27003266fbf076",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53966"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53967"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53968"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53969"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "53970"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/dfd11ceaf69b",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-200000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "61ae6444c801",
	                        "ingress-addon-legacy-200000"
	                    ],
	                    "NetworkID": "eed4dc933c672536e9bda3cec09cde5e7643db3b34efe92796c3b65e162e8498",
	                    "EndpointID": "6a9631feb147b3f0915b1aa797e51573e3d5061bb90391640fde2c1b9250e53c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-200000 -n ingress-addon-legacy-200000: exit status 6 (360.710024ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 15:25:56.608710   80251 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-200000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-200000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.41s)
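Both failures point at the remedy the status warning suggests: `minikube update-context`, which rewrites the profile's server endpoint in the kubeconfig to the live apiserver address. A hedged sketch of the equivalent kubeconfig edit with client-go, using the 127.0.0.1:53970 mapping for 8443/tcp shown in the inspect output (the cluster key is assumed to match the profile name):

	// Sketch: point the profile's kubeconfig cluster entry at the current
	// host-side apiserver port, roughly what `minikube update-context` fixes.
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/16899-76867/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			fmt.Println("load:", err)
			return
		}
		if cluster, ok := cfg.Clusters["ingress-addon-legacy-200000"]; ok {
			cluster.Server = "https://127.0.0.1:53970" // host side of the 8443/tcp mapping
			if err := clientcmd.WriteToFile(*cfg, path); err != nil {
				fmt.Println("write:", err)
			}
		}
	}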

                                                
                                    
TestRunningBinaryUpgrade (63.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:132: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker : exit status 70 (48.227226797s)

                                                
                                                
-- stdout --
	! [running-upgrade-705000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1763101
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:46:20.087352109 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-705000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:46:34.376352933 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-705000", then "minikube start -p running-upgrade-705000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.30.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.30.1
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:46:34.376352933 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
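The diff shows what the legacy provisioner does to /lib/systemd/system/docker.service: it clears the inherited ExecStart with an empty `ExecStart=` (otherwise systemd rejects a second setting for a non-oneshot unit) and then installs its own dockerd command with TLS on 2376 and the 10.96.0.0/12 insecure registry. Note that the rendered unit reads `ExecReload=/bin/kill -s HUP ` with no `$MAINPID`, which suggests the variable was lost when the old template was rendered; in any case docker.service then fails to start and provisioning aborts. A sketch of the render step, assuming a text/template-based provisioner (illustrative, not minikube v1.9.0's actual code):

	// Sketch: render a docker.service unit with the "reset ExecStart" idiom
	// visible in the diff above. Template and fields are illustrative.
	package main

	import (
		"os"
		"text/template"
	)

	const unit = `[Service]
	Type=notify
	# Clear any ExecStart inherited from the base unit; without this systemd
	# rejects the unit with "more than one ExecStart= setting".
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock
	ExecReload=/bin/kill -s HUP $MAINPID
	`

	func main() {
		t := template.Must(template.New("docker.service").Parse(unit))
		if err := t.Execute(os.Stdout, struct{ Port int }{2376}); err != nil {
			panic(err)
		}
	}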
version_upgrade_test.go:132: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker : exit status 70 (4.151487001s)

                                                
                                                
-- stdout --
	* [running-upgrade-705000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2934404020
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-705000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:132: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:132: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe start -p running-upgrade-705000 --memory=2200 --vm-driver=docker : exit status 70 (4.40604456s)

                                                
                                                
-- stdout --
	* [running-upgrade-705000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1617412385
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-705000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:138: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-17 15:46:47.296957 -0700 PDT m=+2412.392527050
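Before declaring failure at version_upgrade_test.go:138, the harness ran the legacy start three times (one 48s attempt, then two ~4s attempts against the half-provisioned container). A sketch of that retry pattern, with an illustrative helper rather than the test's real retry utility:

	// Sketch: retry the legacy start a few times before giving up,
	// mirroring the three Run lines above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func startLegacy(bin, profile string) error {
		cmd := exec.Command(bin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker")
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("start: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		bin := "/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2153951413.exe"
		var err error
		for attempt := 1; attempt <= 3; attempt++ {
			if err = startLegacy(bin, "running-upgrade-705000"); err == nil {
				return
			}
			time.Sleep(5 * time.Second) // back off between attempts
		}
		fmt.Println("legacy v1.9.0 start failed:", err)
	}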
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-705000
helpers_test.go:235: (dbg) docker inspect running-upgrade-705000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b",
	        "Created": "2023-07-17T22:46:28.152016918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1112617,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:46:28.34729564Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b/hostname",
	        "HostsPath": "/var/lib/docker/containers/10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b/hosts",
	        "LogPath": "/var/lib/docker/containers/10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b/10eb4e8ee2e64a8667d65b4f78f0d7262313447dd47df7ad91e10d20fe23164b-json.log",
	        "Name": "/running-upgrade-705000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-705000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e8602706f8b58dea7d834e8e5b3b1258e88c281a8bc1a4d597e38749e0fc5c5e-init/diff:/var/lib/docker/overlay2/6f79fcf1ae04c7315470ec130311770a1d5a1f09c9c016611ad483c8624a568c/diff:/var/lib/docker/overlay2/824d311c0f1a56d58a1b4de8d0d46c2c25458d301b90c38bb87793f004510773/diff:/var/lib/docker/overlay2/23f9e480bbe2b5e47902c250e9dcbf010afbdf61065cf22c306ba0406feb5016/diff:/var/lib/docker/overlay2/40a6b4863a53c49cb42b60a74dcc867d3b22055aeadb1acd6477fb476b42c8c4/diff:/var/lib/docker/overlay2/1e918fa9b271b3fa8d5cf84d6607ca2d421f8e27a238a1b09b7e989b0a0c9d6c/diff:/var/lib/docker/overlay2/43b32e81584664e728c478e05656242f76bc14c9092335f31f9c655d5d4b7d32/diff:/var/lib/docker/overlay2/50b96258598f5094ffbcb721c9226fc815cbb791d0a52acc146b9144fa132eb1/diff:/var/lib/docker/overlay2/9912d8aa578a55d3d854e85fb2c747ff454e142b9fba0446203653f2bfcfebf6/diff:/var/lib/docker/overlay2/fec59d5c2f0915ec67172bad6ff0580636c5cf30ac8f856fa52468d1e6e63eb8/diff:/var/lib/docker/overlay2/c64c0a
ca425c4e87fd598f1e176767c7587d40c04e1a418dd890e59476381def/diff:/var/lib/docker/overlay2/5b11f255860ccf7f8c12dcee584cdd6cf8749747563ca3d98dcb67a103f8876b/diff:/var/lib/docker/overlay2/f5e0502d23539f3d763856b84cc5929004b42c51b8ddcae1cc794c6e3f27cfd3/diff:/var/lib/docker/overlay2/f206036c73f93e71f2749ce2bdc2d5a05ae51031ad42fdd0851eb8b6305c95c0/diff:/var/lib/docker/overlay2/056325070bfcb7eab70071932a81d69bb8a78745bd783bf69c1f3aba45d8ad07/diff:/var/lib/docker/overlay2/506c189a7c5a2dd15dcb23866ea5b0de3f3cbfa45f8a5ed101b1da8cc01acd74/diff:/var/lib/docker/overlay2/a22f478f372890594a544a7667aff6bc1a4e11e024ffc62567c749235e429a49/diff:/var/lib/docker/overlay2/4d0b46e6475de6ab69177443c4e46a7d5285842f33cf8a1e08e77f234efc16b6/diff:/var/lib/docker/overlay2/21136419843b9dd031a7265c9796c123f4b7fc4a3eded9c5606126a076cd0c0e/diff:/var/lib/docker/overlay2/b4079f72b4fa546a22f2d285aa0df36e4efba9859314f5b77604b4d04b43cdcd/diff:/var/lib/docker/overlay2/b31b32e472f01811273fd8cc81dce6165b6336c168c1a0cb892f40cff012b826/diff:/var/lib/d
ocker/overlay2/829369828d47f4ae231abb0804d8da84c80120c46f306995bd9886cf4465aed0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8602706f8b58dea7d834e8e5b3b1258e88c281a8bc1a4d597e38749e0fc5c5e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8602706f8b58dea7d834e8e5b3b1258e88c281a8bc1a4d597e38749e0fc5c5e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8602706f8b58dea7d834e8e5b3b1258e88c281a8bc1a4d597e38749e0fc5c5e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-705000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-705000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-705000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-705000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-705000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a195e17fa795eaa3c3661cbdfcd1d951b788c938e1b0bd56f7b878b7cf3e718b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55345"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55346"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55347"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/a195e17fa795",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "7d8563265e6e5cde350c7748595eda5e2281496515dcc07d8cd310c214d23c29",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "ef7f43e58377b53217a2c20dab25ef139178c7d4a4cc2ff02958959170ac9e32",
	                    "EndpointID": "7d8563265e6e5cde350c7748595eda5e2281496515dcc07d8cd310c214d23c29",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-705000 -n running-upgrade-705000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-705000 -n running-upgrade-705000: exit status 6 (352.715026ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0717 15:46:47.690520   86097 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-705000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-705000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-705000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-705000: (2.176987938s)
--- FAIL: TestRunningBinaryUpgrade (63.60s)
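
The status failure above traces back to the kubeconfig: as the stderr shows, the context "running-upgrade-705000" no longer appears in /Users/jenkins/minikube-integration/16899-76867/kubeconfig, which is also what the "stale minikube-vm" warning in stdout is pointing at. A minimal repair sketch, assuming the profile still exists (here it is deleted immediately afterwards), using only standard minikube/kubectl commands:

	# list the contexts kubectl currently knows about
	kubectl config get-contexts
	# have minikube rewrite the kubeconfig entry for this profile
	minikube update-context -p running-upgrade-705000
	# re-check host/kubelet/apiserver state afterwards
	minikube status -p running-upgrade-705000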

TestKubernetesUpgrade (568.96s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E0717 15:48:25.636591   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.642823   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.654141   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.674368   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.715546   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.797216   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:25.957830   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:26.317414   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:26.958228   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:28.238432   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:30.798663   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:35.918889   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:48:36.934399   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m15.652797392s)

-- stdout --
	* [kubernetes-upgrade-420000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-420000 in cluster kubernetes-upgrade-420000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0717 15:47:40.900048   86453 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:47:40.900253   86453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:47:40.900258   86453 out.go:309] Setting ErrFile to fd 2...
	I0717 15:47:40.900262   86453 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:47:40.900445   86453 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:47:40.901902   86453 out.go:303] Setting JSON to false
	I0717 15:47:40.921569   86453 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":24428,"bootTime":1689609632,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:47:40.921662   86453 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:47:40.943434   86453 out.go:177] * [kubernetes-upgrade-420000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:47:40.985901   86453 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 15:47:40.985926   86453 notify.go:220] Checking for updates...
	I0717 15:47:41.028111   86453 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:47:41.049033   86453 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:47:41.070092   86453 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:47:41.091001   86453 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 15:47:41.112179   86453 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 15:47:41.134342   86453 config.go:182] Loaded profile config "cert-expiration-996000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:47:41.134434   86453 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:47:41.189578   86453 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:47:41.189737   86453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:47:41.293072   86453 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 22:47:41.281541632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:47:41.314319   86453 out.go:177] * Using the docker driver based on user configuration
	I0717 15:47:41.335400   86453 start.go:298] selected driver: docker
	I0717 15:47:41.335423   86453 start.go:880] validating driver "docker" against <nil>
	I0717 15:47:41.335440   86453 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 15:47:41.339482   86453 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:47:41.440246   86453 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 22:47:41.428531094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:47:41.440449   86453 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 15:47:41.440675   86453 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 15:47:41.462324   86453 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 15:47:41.483966   86453 cni.go:84] Creating CNI manager for ""
	I0717 15:47:41.484005   86453 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 15:47:41.484024   86453 start_flags.go:319] config:
	{Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:47:41.506187   86453 out.go:177] * Starting control plane node kubernetes-upgrade-420000 in cluster kubernetes-upgrade-420000
	I0717 15:47:41.528168   86453 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 15:47:41.550048   86453 out.go:177] * Pulling base image ...
	I0717 15:47:41.592126   86453 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 15:47:41.592127   86453 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:47:41.592315   86453 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 15:47:41.592337   86453 cache.go:57] Caching tarball of preloaded images
	I0717 15:47:41.592986   86453 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 15:47:41.593203   86453 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 15:47:41.593658   86453 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/config.json ...
	I0717 15:47:41.593750   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/config.json: {Name:mkbdf0f467bd5f1d5cc5211698325c4b9077670f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:47:41.643151   86453 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 15:47:41.643169   86453 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 15:47:41.643186   86453 cache.go:195] Successfully downloaded all kic artifacts
	I0717 15:47:41.643246   86453 start.go:365] acquiring machines lock for kubernetes-upgrade-420000: {Name:mk1e5008cb98d3b7ced2f1e2a84da5090bcb3039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 15:47:41.643404   86453 start.go:369] acquired machines lock for "kubernetes-upgrade-420000" in 144.422µs
	I0717 15:47:41.643429   86453 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 15:47:41.643510   86453 start.go:125] createHost starting for "" (driver="docker")
	I0717 15:47:41.686161   86453 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 15:47:41.686625   86453 start.go:159] libmachine.API.Create for "kubernetes-upgrade-420000" (driver="docker")
	I0717 15:47:41.686673   86453 client.go:168] LocalClient.Create starting
	I0717 15:47:41.686893   86453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem
	I0717 15:47:41.686966   86453 main.go:141] libmachine: Decoding PEM data...
	I0717 15:47:41.687017   86453 main.go:141] libmachine: Parsing certificate...
	I0717 15:47:41.687124   86453 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem
	I0717 15:47:41.687183   86453 main.go:141] libmachine: Decoding PEM data...
	I0717 15:47:41.687202   86453 main.go:141] libmachine: Parsing certificate...
	I0717 15:47:41.688109   86453 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-420000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 15:47:41.737777   86453 cli_runner.go:211] docker network inspect kubernetes-upgrade-420000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 15:47:41.737882   86453 network_create.go:281] running [docker network inspect kubernetes-upgrade-420000] to gather additional debugging logs...
	I0717 15:47:41.737899   86453 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-420000
	W0717 15:47:41.787860   86453 cli_runner.go:211] docker network inspect kubernetes-upgrade-420000 returned with exit code 1
	I0717 15:47:41.787885   86453 network_create.go:284] error running [docker network inspect kubernetes-upgrade-420000]: docker network inspect kubernetes-upgrade-420000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-420000 not found
	I0717 15:47:41.787903   86453 network_create.go:286] output of [docker network inspect kubernetes-upgrade-420000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-420000 not found
	
	** /stderr **
	I0717 15:47:41.787986   86453 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 15:47:41.839511   86453 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 15:47:41.839873   86453 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f205c0}
	I0717 15:47:41.839890   86453 network_create.go:123] attempt to create docker network kubernetes-upgrade-420000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 15:47:41.839963   86453 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000
	W0717 15:47:41.890669   86453 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000 returned with exit code 1
	W0717 15:47:41.890702   86453 network_create.go:148] failed to create docker network kubernetes-upgrade-420000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 15:47:41.890722   86453 network_create.go:115] failed to create docker network kubernetes-upgrade-420000 192.168.58.0/24, will retry: subnet is taken
	I0717 15:47:41.892130   86453 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 15:47:41.892464   86453 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e23f60}
	I0717 15:47:41.892480   86453 network_create.go:123] attempt to create docker network kubernetes-upgrade-420000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 15:47:41.892547   86453 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000
	W0717 15:47:41.945470   86453 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000 returned with exit code 1
	W0717 15:47:41.945519   86453 network_create.go:148] failed to create docker network kubernetes-upgrade-420000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 15:47:41.945536   86453 network_create.go:115] failed to create docker network kubernetes-upgrade-420000 192.168.67.0/24, will retry: subnet is taken
	I0717 15:47:41.946916   86453 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 15:47:41.947249   86453 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00059de00}
	I0717 15:47:41.947265   86453 network_create.go:123] attempt to create docker network kubernetes-upgrade-420000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0717 15:47:41.947335   86453 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 kubernetes-upgrade-420000
	I0717 15:47:42.031034   86453 network_create.go:107] docker network kubernetes-upgrade-420000 192.168.76.0/24 created
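
The two "Pool overlaps with other one on this address space" errors above are Docker rejecting 192.168.58.0/24 and then 192.168.67.0/24 because existing networks already occupy those ranges; minikube simply walks to the next private /24 and succeeds with 192.168.76.0/24. A hedged sketch (plain docker CLI, nothing minikube-specific) for listing which subnet each network holds when diagnosing such an overlap:

	# print every network's name together with its IPAM subnet(s)
	docker network ls --format '{{.Name}}' | while read -r net; do
	  docker network inspect "$net" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'
	done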
	I0717 15:47:42.031073   86453 kic.go:117] calculated static IP "192.168.76.2" for the "kubernetes-upgrade-420000" container
	I0717 15:47:42.031194   86453 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 15:47:42.082693   86453 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-420000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 --label created_by.minikube.sigs.k8s.io=true
	I0717 15:47:42.133858   86453 oci.go:103] Successfully created a docker volume kubernetes-upgrade-420000
	I0717 15:47:42.133979   86453 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-420000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 --entrypoint /usr/bin/test -v kubernetes-upgrade-420000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 15:47:42.538206   86453 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-420000
	I0717 15:47:42.538245   86453 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:47:42.538262   86453 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 15:47:42.538376   86453 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-420000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 15:47:45.015503   86453 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-420000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.477050059s)
	I0717 15:47:45.015532   86453 kic.go:199] duration metric: took 2.477255 seconds to extract preloaded images to volume
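
The sidecar run above unpacks the preloaded image tarball into the named volume kubernetes-upgrade-420000; the node container later mounts that volume at /var, so the extracted lib/docker tree becomes the node's /var/lib/docker. A hedged spot-check of what landed in the volume, assuming an alpine image is available locally:

	# mount the volume read-only in a throwaway container and list the image store
	docker run --rm -v kubernetes-upgrade-420000:/probe:ro alpine ls /probe/lib/docker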
	I0717 15:47:45.015652   86453 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 15:47:45.117690   86453 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-420000 --name kubernetes-upgrade-420000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-420000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-420000 --network kubernetes-upgrade-420000 --ip 192.168.76.2 --volume kubernetes-upgrade-420000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 15:47:45.376559   86453 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Running}}
	I0717 15:47:45.431355   86453 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:47:45.485918   86453 cli_runner.go:164] Run: docker exec kubernetes-upgrade-420000 stat /var/lib/dpkg/alternatives/iptables
	I0717 15:47:45.578134   86453 oci.go:144] the created container "kubernetes-upgrade-420000" has a running status.
	I0717 15:47:45.578174   86453 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa...
	I0717 15:47:45.637977   86453 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 15:47:45.706572   86453 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:47:45.764460   86453 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 15:47:45.764483   86453 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-420000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 15:47:45.862461   86453 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:47:45.996906   86453 machine.go:88] provisioning docker machine ...
	I0717 15:47:45.996945   86453 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-420000"
	I0717 15:47:45.997047   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:46.049274   86453 main.go:141] libmachine: Using SSH client type: native
	I0717 15:47:46.049692   86453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55458 <nil> <nil>}
	I0717 15:47:46.049711   86453 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-420000 && echo "kubernetes-upgrade-420000" | sudo tee /etc/hostname
	I0717 15:47:46.191228   86453 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-420000
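
Provisioning reaches the node over SSH at 127.0.0.1:55458, the host port Docker chose for the container's 22/tcp when it was published with --publish=127.0.0.1::22 earlier in the log. The long inspect template above recovers that mapping; an equivalent, shorter lookup with the standard CLI:

	# show the host address/port bound to the container's SSH port
	docker port kubernetes-upgrade-420000 22/tcp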
	
	I0717 15:47:46.191316   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:46.242153   86453 main.go:141] libmachine: Using SSH client type: native
	I0717 15:47:46.242522   86453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55458 <nil> <nil>}
	I0717 15:47:46.242537   86453 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-420000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-420000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-420000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 15:47:46.370439   86453 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 15:47:46.370461   86453 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 15:47:46.370481   86453 ubuntu.go:177] setting up certificates
	I0717 15:47:46.370494   86453 provision.go:83] configureAuth start
	I0717 15:47:46.370563   86453 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-420000
	I0717 15:47:46.421342   86453 provision.go:138] copyHostCerts
	I0717 15:47:46.421447   86453 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 15:47:46.421457   86453 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 15:47:46.421568   86453 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 15:47:46.421767   86453 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 15:47:46.421772   86453 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 15:47:46.421844   86453 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 15:47:46.422001   86453 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 15:47:46.422006   86453 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 15:47:46.422071   86453 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 15:47:46.422201   86453 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-420000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-420000]
	I0717 15:47:46.605969   86453 provision.go:172] copyRemoteCerts
	I0717 15:47:46.606046   86453 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 15:47:46.606104   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:46.658333   86453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55458 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:47:46.754332   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 15:47:46.775940   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 15:47:46.796939   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 15:47:46.818149   86453 provision.go:86] duration metric: configureAuth took 447.635589ms
	I0717 15:47:46.818165   86453 ubuntu.go:193] setting minikube options for container-runtime
	I0717 15:47:46.818302   86453 config.go:182] Loaded profile config "kubernetes-upgrade-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 15:47:46.818372   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:46.870848   86453 main.go:141] libmachine: Using SSH client type: native
	I0717 15:47:46.871243   86453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55458 <nil> <nil>}
	I0717 15:47:46.871261   86453 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 15:47:46.996954   86453 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 15:47:46.996975   86453 ubuntu.go:71] root file system type: overlay
	I0717 15:47:46.997081   86453 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 15:47:46.997173   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:47.048581   86453 main.go:141] libmachine: Using SSH client type: native
	I0717 15:47:47.048937   86453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55458 <nil> <nil>}
	I0717 15:47:47.048987   86453 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 15:47:47.187409   86453 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 15:47:47.187510   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:47.239358   86453 main.go:141] libmachine: Using SSH client type: native
	I0717 15:47:47.239730   86453 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55458 <nil> <nil>}
	I0717 15:47:47.239744   86453 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 15:47:47.909879   86453 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:47:47.185264672 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
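
The diff above is the substance of this provisioning step: minikube replaces the stock docker.service outright, and the empty ExecStart= line exists to clear the value inherited from the base unit, since systemd rejects a second ExecStart= for anything but Type=oneshot services. The same reset idiom works in an ordinary drop-in override; a minimal sketch (standard systemd paths, not taken from this log):

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	# an empty assignment clears the ExecStart inherited from the base unit
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	# apply with: sudo systemctl daemon-reload && sudo systemctl restart docker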
	
	I0717 15:47:47.909914   86453 machine.go:91] provisioned docker machine in 1.912961711s
	I0717 15:47:47.909921   86453 client.go:171] LocalClient.Create took 6.223204859s
	I0717 15:47:47.909942   86453 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-420000" took 6.223283727s
	I0717 15:47:47.909951   86453 start.go:300] post-start starting for "kubernetes-upgrade-420000" (driver="docker")
	I0717 15:47:47.909960   86453 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 15:47:47.910014   86453 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 15:47:47.910134   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:47.963394   86453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55458 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:47:48.056257   86453 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 15:47:48.060457   86453 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 15:47:48.060479   86453 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 15:47:48.060487   86453 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 15:47:48.060491   86453 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 15:47:48.060500   86453 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 15:47:48.060592   86453 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 15:47:48.060769   86453 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 15:47:48.060965   86453 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 15:47:48.069689   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:47:48.091643   86453 start.go:303] post-start completed in 181.681565ms
	I0717 15:47:48.092182   86453 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-420000
	I0717 15:47:48.146231   86453 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/config.json ...
	I0717 15:47:48.146710   86453 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 15:47:48.146773   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:48.197678   86453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55458 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:47:48.287945   86453 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 15:47:48.292962   86453 start.go:128] duration metric: createHost completed in 6.649404118s
	I0717 15:47:48.292982   86453 start.go:83] releasing machines lock for "kubernetes-upgrade-420000", held for 6.64953192s
	I0717 15:47:48.293064   86453 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-420000
	I0717 15:47:48.344944   86453 ssh_runner.go:195] Run: cat /version.json
	I0717 15:47:48.344968   86453 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 15:47:48.345026   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:48.345052   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:48.401322   86453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55458 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:47:48.401440   86453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55458 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:47:48.596029   86453 ssh_runner.go:195] Run: systemctl --version
	I0717 15:47:48.600987   86453 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 15:47:48.606355   86453 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 15:47:48.629204   86453 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 15:47:48.629273   86453 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 15:47:48.645103   86453 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 15:47:48.661503   86453 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
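	The three find/sed passes above normalize whatever CNI configs ship in the base image: the loopback config gains a "name" field and a pinned cniVersion, while bridge and podman configs lose their IPv6 routes and have the IPv4 subnet rewritten to the 10.244.0.0/16 pod CIDR. A quick post-patch check (a sketch; the file names vary by base image):
	
	grep -h '"subnet"' /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman* 2>/dev/null
	# expected after patching:   "subnet": "10.244.0.0/16"
	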
	I0717 15:47:48.661521   86453 start.go:466] detecting cgroup driver to use...
	I0717 15:47:48.661536   86453 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:47:48.661665   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:47:48.678052   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 15:47:48.688451   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 15:47:48.698821   86453 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 15:47:48.698883   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 15:47:48.709117   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:47:48.719317   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 15:47:48.730470   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:47:48.740643   86453 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 15:47:48.749829   86453 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 15:47:48.759885   86453 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 15:47:48.769075   86453 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 15:47:48.777571   86453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:47:48.853707   86453 ssh_runner.go:195] Run: sudo systemctl restart containerd
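	The sed sequence above rewrites a handful of fields in /etc/containerd/config.toml before the restart; a sketch of verifying that the cgroupfs choice and the other targeted fields took effect (field names as they appear in the commands above):
	
	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expected, approximately:
	#   sandbox_image = "registry.k8s.io/pause:3.1"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	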
	I0717 15:47:48.931668   86453 start.go:466] detecting cgroup driver to use...
	I0717 15:47:48.931687   86453 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:47:48.931761   86453 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 15:47:48.943929   86453 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 15:47:48.944002   86453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 15:47:48.957189   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:47:48.976489   86453 ssh_runner.go:195] Run: which cri-dockerd
	I0717 15:47:48.982046   86453 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 15:47:48.994112   86453 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 15:47:49.012021   86453 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 15:47:49.116287   86453 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 15:47:49.205382   86453 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 15:47:49.205405   86453 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 15:47:49.244282   86453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:47:49.318977   86453 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 15:47:49.569483   86453 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 15:47:49.598076   86453 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
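	The 144-byte /etc/docker/daemon.json copied above is not echoed into the log; a plausible reconstruction consistent with the "cgroupfs" driver it configures (an assumption, not the recorded file) would be:
	
	sudo tee /etc/docker/daemon.json <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should print: cgroupfs
	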
	I0717 15:47:49.666019   86453 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 15:47:49.666194   86453 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-420000 dig +short host.docker.internal
	I0717 15:47:49.786472   86453 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 15:47:49.786600   86453 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 15:47:49.791992   86453 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
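	The /etc/hosts one-liner above is an idempotent replace-or-append: filter out any existing entry for the name, append a fresh one, and copy the temp file back in a single sudo step (a plain > redirection would not survive sudo). A generalized sketch of the same pattern:
	
	NAME=host.minikube.internal IP=192.168.65.254
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts
	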
	I0717 15:47:49.803147   86453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:47:49.860393   86453 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:47:49.860474   86453 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:47:49.881463   86453 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 15:47:49.881480   86453 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 15:47:49.881539   86453 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 15:47:49.890976   86453 ssh_runner.go:195] Run: which lz4
	I0717 15:47:49.895627   86453 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 15:47:49.900116   86453 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 15:47:49.900141   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 15:47:54.934554   86453 docker.go:600] Took 5.038972 seconds to copy over tarball
	I0717 15:47:54.934653   86453 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 15:47:57.036249   86453 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.101538339s)
	I0717 15:47:57.036263   86453 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0717 15:47:57.088691   86453 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 15:47:57.097693   86453 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 15:47:57.114564   86453 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:47:57.196322   86453 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 15:47:57.846306   86453 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:47:57.867879   86453 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 15:47:57.867891   86453 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 15:47:57.867900   86453 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 15:47:57.874534   86453 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 15:47:57.874538   86453 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 15:47:57.874559   86453 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 15:47:57.874565   86453 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 15:47:57.874544   86453 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 15:47:57.874689   86453 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 15:47:57.874534   86453 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:47:57.874822   86453 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 15:47:57.879525   86453 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 15:47:57.880715   86453 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 15:47:57.880865   86453 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 15:47:57.881289   86453 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 15:47:57.881404   86453 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:47:57.881526   86453 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 15:47:57.881651   86453 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 15:47:57.882842   86453 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 15:47:59.024707   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 15:47:59.047073   86453 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 15:47:59.047112   86453 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 15:47:59.047172   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 15:47:59.067824   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 15:47:59.205426   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 15:47:59.226127   86453 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 15:47:59.226178   86453 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 15:47:59.226243   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 15:47:59.247368   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 15:47:59.414878   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 15:47:59.436807   86453 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 15:47:59.436834   86453 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 15:47:59.436896   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 15:47:59.459003   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 15:47:59.470952   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 15:47:59.493079   86453 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 15:47:59.493107   86453 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 15:47:59.493164   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 15:47:59.515089   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 15:48:00.035737   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 15:48:00.059634   86453 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 15:48:00.059660   86453 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 15:48:00.059721   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 15:48:00.080496   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 15:48:00.271800   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:48:00.343335   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 15:48:00.364311   86453 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 15:48:00.364340   86453 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 15:48:00.364412   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 15:48:00.384622   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 15:48:00.641740   86453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 15:48:00.662624   86453 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 15:48:00.662663   86453 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 15:48:00.662731   86453 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 15:48:00.683281   86453 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 15:48:00.683336   86453 cache_images.go:92] LoadImages completed in 2.815407826s
	W0717 15:48:00.683393   86453 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0: no such file or directory
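	Each image in the loop above is verified by ID before use, and reloaded from the local cache on mismatch. A hand-rolled sketch of that check for one image, using the pause:3.1 hash recorded earlier in this log:
	
	want=da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
	have=$(docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1 2>/dev/null)
	[ "${have#sha256:}" = "$want" ] || echo 'registry.k8s.io/pause:3.1 needs transfer'
	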
	I0717 15:48:00.683459   86453 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 15:48:00.738260   86453 cni.go:84] Creating CNI manager for ""
	I0717 15:48:00.738277   86453 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 15:48:00.738298   86453 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 15:48:00.738315   86453 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-420000 NodeName:kubernetes-upgrade-420000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 15:48:00.738441   86453 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-420000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-420000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
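	
	The rendered kubeadm config above can be exercised without mutating the node by asking kubeadm for a dry run; a sketch using the same paths as this run:
	
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run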
	
	I0717 15:48:00.738513   86453 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-420000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 15:48:00.738583   86453 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 15:48:00.748241   86453 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 15:48:00.748380   86453 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 15:48:00.757908   86453 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I0717 15:48:00.774434   86453 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 15:48:00.791298   86453 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2180 bytes)
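	Three files were just staged: the kubelet systemd drop-in, the unit file, and the kubeadm config as kubeadm.yaml.new. A sketch of confirming what systemd will actually execute once the drop-in is merged:
	
	systemctl cat kubelet                            # unit plus drop-ins, merged
	systemctl show kubelet -p ExecStart --no-pager
	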
	I0717 15:48:00.808809   86453 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 15:48:00.813184   86453 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 15:48:00.824332   86453 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000 for IP: 192.168.76.2
	I0717 15:48:00.824349   86453 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:00.824537   86453 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 15:48:00.824603   86453 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 15:48:00.824659   86453 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key
	I0717 15:48:00.824683   86453 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt with IP's: []
	I0717 15:48:00.987160   86453 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt ...
	I0717 15:48:00.987182   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt: {Name:mke68070a39c9e8b9a2b7a77c19f1bfb1943da99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:00.987554   86453 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key ...
	I0717 15:48:00.987563   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key: {Name:mk90ce01adf2c3c3a52bc95a522aec717f64597f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:00.987793   86453 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key.31bdca25
	I0717 15:48:00.987816   86453 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 15:48:01.143377   86453 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt.31bdca25 ...
	I0717 15:48:01.143388   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt.31bdca25: {Name:mkf8f24209630b9e50ca1af12bc43d6e4f1efe25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:01.143663   86453 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key.31bdca25 ...
	I0717 15:48:01.143671   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key.31bdca25: {Name:mk8e4a180d4c43bf12beb1c7735b862990323060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:01.143857   86453 certs.go:337] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt
	I0717 15:48:01.144041   86453 certs.go:341] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key
	I0717 15:48:01.144185   86453 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key
	I0717 15:48:01.144199   86453 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.crt with IP's: []
	I0717 15:48:01.188226   86453 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.crt ...
	I0717 15:48:01.188235   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.crt: {Name:mk21d1c2e95178490475250667a9032580fbb726 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:01.188463   86453 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key ...
	I0717 15:48:01.188472   86453 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key: {Name:mk3a8fa8635388b43b7dab7018b9971550a99645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:48:01.188874   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 15:48:01.188921   86453 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 15:48:01.188932   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 15:48:01.188965   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 15:48:01.188994   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 15:48:01.189022   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 15:48:01.189103   86453 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:48:01.189603   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 15:48:01.213778   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 15:48:01.236527   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 15:48:01.258301   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 15:48:01.280285   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 15:48:01.302889   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 15:48:01.325356   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 15:48:01.347821   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 15:48:01.370065   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 15:48:01.393149   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 15:48:01.415292   86453 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 15:48:01.437742   86453 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 15:48:01.454400   86453 ssh_runner.go:195] Run: openssl version
	I0717 15:48:01.460491   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 15:48:01.470603   86453 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 15:48:01.475270   86453 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 15:48:01.475331   86453 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 15:48:01.482383   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 15:48:01.491986   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 15:48:01.502108   86453 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 15:48:01.506529   86453 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 15:48:01.506576   86453 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 15:48:01.513893   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 15:48:01.523582   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 15:48:01.533415   86453 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:48:01.538021   86453 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:48:01.538071   86453 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:48:01.544970   86453 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 15:48:01.554946   86453 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 15:48:01.559283   86453 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 15:48:01.559331   86453 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:48:01.559435   86453 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:48:01.579238   86453 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 15:48:01.588659   86453 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 15:48:01.598188   86453 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 15:48:01.598253   86453 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 15:48:01.607125   86453 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 15:48:01.607156   86453 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 15:48:01.656335   86453 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 15:48:01.656753   86453 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 15:48:01.915875   86453 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 15:48:01.915977   86453 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 15:48:01.916084   86453 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 15:48:02.099877   86453 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 15:48:02.100663   86453 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 15:48:02.107225   86453 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 15:48:02.181774   86453 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 15:48:02.225078   86453 out.go:204]   - Generating certificates and keys ...
	I0717 15:48:02.225175   86453 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 15:48:02.225253   86453 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 15:48:02.287407   86453 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 15:48:02.432865   86453 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 15:48:02.635613   86453 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 15:48:02.847900   86453 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 15:48:02.963682   86453 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 15:48:02.963828   86453 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 15:48:03.289077   86453 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 15:48:03.289191   86453 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 15:48:03.462717   86453 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 15:48:03.685313   86453 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 15:48:03.830139   86453 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 15:48:03.830208   86453 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 15:48:03.888836   86453 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 15:48:03.957812   86453 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 15:48:04.005965   86453 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 15:48:04.123036   86453 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 15:48:04.123522   86453 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 15:48:04.143989   86453 out.go:204]   - Booting up control plane ...
	I0717 15:48:04.144180   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 15:48:04.144406   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 15:48:04.144558   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 15:48:04.144744   86453 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 15:48:04.144978   86453 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 15:48:44.133586   86453 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 15:48:44.134467   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:48:44.134691   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:48:49.135271   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:48:49.135524   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:48:59.137495   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:48:59.137725   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:49:19.138107   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:49:19.138270   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:49:59.139840   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:49:59.139987   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:49:59.140009   86453 kubeadm.go:322] 
	I0717 15:49:59.140046   86453 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 15:49:59.140078   86453 kubeadm.go:322] 	timed out waiting for the condition
	I0717 15:49:59.140087   86453 kubeadm.go:322] 
	I0717 15:49:59.140190   86453 kubeadm.go:322] This error is likely caused by:
	I0717 15:49:59.140216   86453 kubeadm.go:322] 	- The kubelet is not running
	I0717 15:49:59.140356   86453 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 15:49:59.140368   86453 kubeadm.go:322] 
	I0717 15:49:59.140466   86453 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 15:49:59.140495   86453 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 15:49:59.140534   86453 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 15:49:59.140543   86453 kubeadm.go:322] 
	I0717 15:49:59.140690   86453 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 15:49:59.140819   86453 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 15:49:59.140969   86453 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 15:49:59.141036   86453 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 15:49:59.141097   86453 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 15:49:59.141127   86453 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 15:49:59.143146   86453 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 15:49:59.143221   86453 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 15:49:59.143365   86453 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 15:49:59.143475   86453 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 15:49:59.143565   86453 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 15:49:59.143632   86453 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 15:49:59.143709   86453 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
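	The failure text above already names the triage path; collected into one sketch, with one extra check motivated by the IsDockerSystemdCheck warning (kubelet and Docker must agree on the cgroup driver; both are "cgroupfs" in this run, so a mismatch is not the culprit here):
	
	systemctl status kubelet --no-pager
	journalctl -xeu kubelet --no-pager | tail -n 50
	docker ps -a | grep kube | grep -v pause
	docker info --format '{{.CgroupDriver}}'           # Docker's driver
	grep cgroupDriver /var/lib/kubelet/config.yaml     # kubelet's driver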
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-420000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
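The two loudest preflight warnings above (cgroupfs driver, swap) have standard remedies on a stock Linux node; a minimal sketch, assuming the usual /etc/docker/daemon.json location (the minikube base image normally takes care of this itself):

    # switch Docker to the systemd cgroup driver kubeadm recommends
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl restart docker
    # clear the Swap warning for the current boot; persist by editing /etc/fstab
    sudo swapoff -a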
	
	I0717 15:49:59.143742   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 15:49:59.565782   86453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:49:59.578354   86453 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 15:49:59.578419   86453 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 15:49:59.587961   86453 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 15:49:59.588004   86453 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 15:49:59.659389   86453 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 15:49:59.659425   86453 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 15:49:59.914619   86453 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 15:49:59.914721   86453 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 15:49:59.914851   86453 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0717 15:50:00.096832   86453 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 15:50:00.097590   86453 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 15:50:00.104402   86453 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 15:50:00.174532   86453 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 15:50:00.218602   86453 out.go:204]   - Generating certificates and keys ...
	I0717 15:50:00.218684   86453 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 15:50:00.218749   86453 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 15:50:00.218824   86453 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 15:50:00.218872   86453 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 15:50:00.218947   86453 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 15:50:00.219002   86453 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 15:50:00.219069   86453 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 15:50:00.219116   86453 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 15:50:00.219178   86453 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 15:50:00.219261   86453 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 15:50:00.219325   86453 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 15:50:00.219422   86453 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 15:50:00.321672   86453 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 15:50:00.566441   86453 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 15:50:00.660994   86453 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 15:50:00.902793   86453 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 15:50:00.922949   86453 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 15:50:00.944314   86453 out.go:204]   - Booting up control plane ...
	I0717 15:50:00.944461   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 15:50:00.944618   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 15:50:00.944746   86453 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 15:50:00.944886   86453 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 15:50:00.945273   86453 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 15:50:40.912395   86453 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 15:50:40.932226   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:50:40.932377   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:50:45.914702   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:50:45.923841   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:50:55.916832   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:50:55.925231   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:51:15.918680   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:51:15.930210   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:51:55.919993   86453 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 15:51:55.926595   86453 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 15:51:55.926615   86453 kubeadm.go:322] 
	I0717 15:51:55.926653   86453 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 15:51:55.926724   86453 kubeadm.go:322] 	timed out waiting for the condition
	I0717 15:51:55.926755   86453 kubeadm.go:322] 
	I0717 15:51:55.926783   86453 kubeadm.go:322] This error is likely caused by:
	I0717 15:51:55.926815   86453 kubeadm.go:322] 	- The kubelet is not running
	I0717 15:51:55.927008   86453 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 15:51:55.927017   86453 kubeadm.go:322] 
	I0717 15:51:55.927155   86453 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 15:51:55.927184   86453 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 15:51:55.927258   86453 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 15:51:55.927268   86453 kubeadm.go:322] 
	I0717 15:51:55.927419   86453 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 15:51:55.927543   86453 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I0717 15:51:55.927616   86453 kubeadm.go:322] Here is one example of how you may list all Kubernetes containers running in docker:
	I0717 15:51:55.927662   86453 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 15:51:55.927730   86453 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 15:51:55.927765   86453 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 15:51:55.927922   86453 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 15:51:55.927987   86453 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 15:51:55.928084   86453 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 15:51:55.928176   86453 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 15:51:55.928243   86453 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 15:51:55.928302   86453 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 15:51:55.928329   86453 kubeadm.go:406] StartCluster complete in 3m54.367563398s
	I0717 15:51:55.928417   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 15:51:55.948861   86453 logs.go:284] 0 containers: []
	W0717 15:51:55.948875   86453 logs.go:286] No container was found matching "kube-apiserver"
	I0717 15:51:55.948956   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 15:51:55.970040   86453 logs.go:284] 0 containers: []
	W0717 15:51:55.970056   86453 logs.go:286] No container was found matching "etcd"
	I0717 15:51:55.970128   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 15:51:55.992170   86453 logs.go:284] 0 containers: []
	W0717 15:51:55.992189   86453 logs.go:286] No container was found matching "coredns"
	I0717 15:51:55.992276   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 15:51:56.016375   86453 logs.go:284] 0 containers: []
	W0717 15:51:56.016391   86453 logs.go:286] No container was found matching "kube-scheduler"
	I0717 15:51:56.016469   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 15:51:56.038054   86453 logs.go:284] 0 containers: []
	W0717 15:51:56.038067   86453 logs.go:286] No container was found matching "kube-proxy"
	I0717 15:51:56.038137   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 15:51:56.058444   86453 logs.go:284] 0 containers: []
	W0717 15:51:56.058461   86453 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 15:51:56.058535   86453 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 15:51:56.079662   86453 logs.go:284] 0 containers: []
	W0717 15:51:56.079679   86453 logs.go:286] No container was found matching "kindnet"
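The per-component probes above are plain docker invocations (the k8s_ prefix is the dockershim container-naming convention) and can be replayed by hand; a compact equivalent of the sequence:

    # count matching containers for each control-plane component
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      printf '%s: ' "$c"
      docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}' | wc -l
    done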
	I0717 15:51:56.079687   86453 logs.go:123] Gathering logs for describe nodes ...
	I0717 15:51:56.079696   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 15:51:56.150008   86453 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 15:51:56.150028   86453 logs.go:123] Gathering logs for Docker ...
	I0717 15:51:56.150040   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 15:51:56.171216   86453 logs.go:123] Gathering logs for container status ...
	I0717 15:51:56.171233   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 15:51:56.230228   86453 logs.go:123] Gathering logs for kubelet ...
	I0717 15:51:56.230244   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 15:51:56.276673   86453 logs.go:123] Gathering logs for dmesg ...
	I0717 15:51:56.276695   86453 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
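minikube's log gathering here is ordinary shell and is worth re-running interactively when triaging; these lines are taken verbatim from the ssh_runner commands above:

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400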
	W0717 15:51:56.297397   86453 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 15:51:56.297423   86453 out.go:239] * 
	W0717 15:51:56.297470   86453 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 15:51:56.297490   86453 out.go:239] * 
	W0717 15:51:56.298246   86453 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 15:51:56.361604   86453 out.go:177] 
	W0717 15:51:56.403412   86453 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 15:51:56.403469   86453 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 15:51:56.403486   86453 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 15:51:56.446472   86453 out.go:177] 

                                                
                                                
** /stderr **
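The suggestion printed in the log above can be applied to this start invocation by hand; a hedged sketch, reusing the test's own flags (whether it helps depends on why the kubelet never answered on port 10248):

    out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 \
      --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd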
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-420000
version_upgrade_test.go:239: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-420000: (1.562285118s)
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-420000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-420000 status --format={{.Host}}: exit status 7 (97.233981ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
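Exit status 7 is expected immediately after a stop: minikube encodes host, cluster, and Kubernetes health in the low bits of the status exit code, so 7 (1+2+4) is consistent with a fully stopped profile, which is why the test only notes it as "may be ok":

    # a stopped profile prints "Stopped" and exits non-zero by design
    out/minikube-darwin-amd64 -p kubernetes-upgrade-420000 status --format={{.Host}} \
      || echo "status exit code: $?"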
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:255: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker : (4m36.451489556s)
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-420000 version --output=json
version_upgrade_test.go:279: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:281: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (409.864641ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-420000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.27.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-420000
	    minikube start -p kubernetes-upgrade-420000 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4200002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.27.3, by running:
	    
	    minikube start -p kubernetes-upgrade-420000 --kubernetes-version=v1.27.3
	    

                                                
                                                
** /stderr **
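The refusal is deliberate: a cluster's stored API state is not guaranteed to be readable by an older control plane, so minikube only offers the recreate or side-by-side paths listed above. Option 1, as a copy-pasteable sequence:

    minikube delete -p kubernetes-upgrade-420000
    minikube start -p kubernetes-upgrade-420000 --kubernetes-version=v1.16.0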
version_upgrade_test.go:285: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:287: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-420000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=docker : (28.530285343s)
version_upgrade_test.go:291: *** TestKubernetesUpgrade FAILED at 2023-07-17 15:57:03.669743 -0700 PDT m=+3028.761676781
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-420000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-420000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457",
	        "Created": "2023-07-17T22:47:45.167705351Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1140106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:51:59.789221709Z",
	            "FinishedAt": "2023-07-17T22:51:57.004452015Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457/hostname",
	        "HostsPath": "/var/lib/docker/containers/b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457/hosts",
	        "LogPath": "/var/lib/docker/containers/b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457/b80ccbd9dfc3d18f629668b45e4f340d413410a25f12a13ab5651a52016b9457-json.log",
	        "Name": "/kubernetes-upgrade-420000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-420000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-420000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04c47173b48e1b1e0fb02dbf1f3cd7b681d929b18f614fd1adc49d8f79bb9b80-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04c47173b48e1b1e0fb02dbf1f3cd7b681d929b18f614fd1adc49d8f79bb9b80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04c47173b48e1b1e0fb02dbf1f3cd7b681d929b18f614fd1adc49d8f79bb9b80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04c47173b48e1b1e0fb02dbf1f3cd7b681d929b18f614fd1adc49d8f79bb9b80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-420000",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-420000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-420000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-420000",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-420000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b80d4250d8a75f7945d23b4987cf6b89ea1c4890b33dc8398fd8cc903a07e1c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55673"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55674"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55675"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55676"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55677"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8b80d4250d8a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-420000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "b80ccbd9dfc3",
	                        "kubernetes-upgrade-420000"
	                    ],
	                    "NetworkID": "9fad1b8e915cd9c1db4a4c6cf74b5f33ad50090dac7ab4820558e43ad9782797",
	                    "EndpointID": "d2ab9b8077c0e8b8979b8704be1ec5021a272ba761ce128b0af1fc0db4c23b20",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
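
Note on the inspect output above: HostConfig.PortBindings requests "HostPort": "0", i.e. any free host port, and the ports Docker actually assigned (55673-55677) only appear under NetworkSettings.Ports. A minimal Go sketch of pulling that mapping out of the inspect JSON, assuming only the container name and field names shown in this run:

	// Sketch only: decode the NetworkSettings.Ports map from `docker inspect`.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "kubernetes-upgrade-420000").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		if len(entries) == 0 {
			log.Fatal("container not found")
		}
		for port, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort) // e.g. 22/tcp -> 127.0.0.1:55673
			}
		}
	}

The same lookup is what the harness does further down with an inline template, e.g. docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'".
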
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-420000 -n kubernetes-upgrade-420000
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-420000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-420000 logs -n 25: (2.380539497s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:55 PDT | 17 Jul 23 15:55 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:55 PDT | 17 Jul 23 15:55 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:55 PDT | 17 Jul 23 15:55 PDT |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:55 PDT | 17 Jul 23 15:55 PDT |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:55 PDT | 17 Jul 23 15:56 PDT |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo docker                        | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo cat                           | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo                               | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo find                          | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p flannel-679000 sudo crio                          | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p flannel-679000                                    | flannel-679000            | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	| start   | -p enable-default-cni-679000                         | enable-default-cni-679000 | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-420000                         | kubernetes-upgrade-420000 | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-420000                         | kubernetes-upgrade-420000 | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:57 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=docker                                      |                           |         |         |                     |                     |
	| ssh     | -p enable-default-cni-679000                         | enable-default-cni-679000 | jenkins | v1.31.0 | 17 Jul 23 15:56 PDT | 17 Jul 23 15:56 PDT |
	|         | pgrep -a kubelet                                     |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 15:56:35
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 15:56:35.178250   89231 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:56:35.178425   89231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:56:35.178430   89231 out.go:309] Setting ErrFile to fd 2...
	I0717 15:56:35.178434   89231 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:56:35.178617   89231 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:56:35.180006   89231 out.go:303] Setting JSON to false
	I0717 15:56:35.200351   89231 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":24963,"bootTime":1689609632,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:56:35.200444   89231 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:56:35.222183   89231 out.go:177] * [kubernetes-upgrade-420000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:56:35.264082   89231 notify.go:220] Checking for updates...
	I0717 15:56:35.264101   89231 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 15:56:35.285159   89231 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:56:35.306161   89231 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:56:35.327068   89231 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:56:35.348137   89231 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 15:56:35.369113   89231 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 15:56:35.390441   89231 config.go:182] Loaded profile config "kubernetes-upgrade-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:56:35.390986   89231 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:56:35.447004   89231 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:56:35.447155   89231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:56:35.553833   89231 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:73 SystemTime:2023-07-17 22:56:35.541288991 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:56:35.596303   89231 out.go:177] * Using the docker driver based on existing profile
	I0717 15:56:35.617151   89231 start.go:298] selected driver: docker
	I0717 15:56:35.617169   89231 start.go:880] validating driver "docker" against &{Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:56:35.617296   89231 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 15:56:35.621126   89231 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:56:35.728870   89231 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:73 OomKillDisable:false NGoroutines:73 SystemTime:2023-07-17 22:56:35.716154936 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:56:35.729112   89231 cni.go:84] Creating CNI manager for ""
	I0717 15:56:35.729130   89231 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 15:56:35.729143   89231 start_flags.go:319] config:
	{Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:56:35.751089   89231 out.go:177] * Starting control plane node kubernetes-upgrade-420000 in cluster kubernetes-upgrade-420000
	I0717 15:56:35.771803   89231 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 15:56:35.792507   89231 out.go:177] * Pulling base image ...
	I0717 15:56:35.813665   89231 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 15:56:35.813692   89231 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 15:56:35.813729   89231 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 15:56:35.813746   89231 cache.go:57] Caching tarball of preloaded images
	I0717 15:56:35.813879   89231 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 15:56:35.813891   89231 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 15:56:35.814493   89231 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/config.json ...
	I0717 15:56:35.866466   89231 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 15:56:35.866485   89231 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 15:56:35.866506   89231 cache.go:195] Successfully downloaded all kic artifacts
	I0717 15:56:35.866567   89231 start.go:365] acquiring machines lock for kubernetes-upgrade-420000: {Name:mk1e5008cb98d3b7ced2f1e2a84da5090bcb3039 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 15:56:35.866672   89231 start.go:369] acquired machines lock for "kubernetes-upgrade-420000" in 83.43µs
	I0717 15:56:35.866702   89231 start.go:96] Skipping create...Using existing machine configuration
	I0717 15:56:35.866710   89231 fix.go:54] fixHost starting: 
	I0717 15:56:35.866940   89231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:56:35.917996   89231 fix.go:102] recreateIfNeeded on kubernetes-upgrade-420000: state=Running err=<nil>
	W0717 15:56:35.918043   89231 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 15:56:35.939819   89231 out.go:177] * Updating the running docker "kubernetes-upgrade-420000" container ...
	I0717 15:56:34.197682   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:34.697489   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:35.197847   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:35.698152   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:36.196953   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:36.696907   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:37.196982   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:37.698076   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:38.197512   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:38.697848   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
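
The 89092 lines interleaved above come from a second minikube start running in parallel; it is waiting for the cluster's default service account by retrying kubectl get sa default at roughly 500ms intervals. A minimal Go sketch of that poll-until-ready loop, with the kubectl invocation and the overall deadline assumed:

	// Sketch only: retry a readiness check every 500ms until it passes or the
	// context deadline expires, mirroring the loop in the log above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func waitForDefaultSA(ctx context.Context) error {
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			// Stand-in for: kubectl get sa default --kubeconfig=...
			if exec.CommandContext(ctx, "kubectl", "get", "sa", "default").Run() == nil {
				return nil // default service account exists
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-tick.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		fmt.Println(waitForDefaultSA(ctx))
	}
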
	I0717 15:56:35.981459   89231 machine.go:88] provisioning docker machine ...
	I0717 15:56:35.981517   89231 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-420000"
	I0717 15:56:35.981665   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:36.036611   89231 main.go:141] libmachine: Using SSH client type: native
	I0717 15:56:36.037037   89231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55673 <nil> <nil>}
	I0717 15:56:36.037049   89231 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-420000 && echo "kubernetes-upgrade-420000" | sudo tee /etc/hostname
	I0717 15:56:36.179696   89231 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-420000
	
	I0717 15:56:36.179793   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:36.234932   89231 main.go:141] libmachine: Using SSH client type: native
	I0717 15:56:36.235413   89231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55673 <nil> <nil>}
	I0717 15:56:36.235435   89231 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-420000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-420000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-420000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 15:56:36.366125   89231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 15:56:36.366148   89231 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 15:56:36.366169   89231 ubuntu.go:177] setting up certificates
	I0717 15:56:36.366184   89231 provision.go:83] configureAuth start
	I0717 15:56:36.366284   89231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-420000
	I0717 15:56:36.420680   89231 provision.go:138] copyHostCerts
	I0717 15:56:36.420770   89231 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 15:56:36.420780   89231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 15:56:36.420862   89231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 15:56:36.421068   89231 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 15:56:36.421075   89231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 15:56:36.421136   89231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 15:56:36.421302   89231 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 15:56:36.421307   89231 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 15:56:36.421384   89231 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 15:56:36.421511   89231 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-420000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-420000]
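
The provision step above issues a server certificate signed by the local minikube CA, with SANs covering the container IP, loopback, and the profile hostname. A minimal Go sketch of minting a certificate with that SAN list via crypto/x509, with an in-memory self-signed CA standing in for ca.pem/ca-key.pem and errors elided for brevity:

	// Sketch only: server cert with the SAN list from the log line above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		ca := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srv := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-420000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			// san=[192.168.76.2 127.0.0.1 localhost minikube kubernetes-upgrade-420000]
			IPAddresses: []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-420000"},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
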
	I0717 15:56:36.536627   89231 provision.go:172] copyRemoteCerts
	I0717 15:56:36.536700   89231 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 15:56:36.536795   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:36.589042   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:56:36.682727   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 15:56:36.705289   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0717 15:56:36.730067   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 15:56:36.755559   89231 provision.go:86] duration metric: configureAuth took 389.330121ms
	I0717 15:56:36.755581   89231 ubuntu.go:193] setting minikube options for container-runtime
	I0717 15:56:36.755769   89231 config.go:182] Loaded profile config "kubernetes-upgrade-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:56:36.755840   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:36.809408   89231 main.go:141] libmachine: Using SSH client type: native
	I0717 15:56:36.809766   89231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55673 <nil> <nil>}
	I0717 15:56:36.809777   89231 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 15:56:36.938434   89231 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 15:56:36.938446   89231 ubuntu.go:71] root file system type: overlay
	I0717 15:56:36.938532   89231 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 15:56:36.938625   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:36.991016   89231 main.go:141] libmachine: Using SSH client type: native
	I0717 15:56:36.991374   89231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55673 <nil> <nil>}
	I0717 15:56:36.991432   89231 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 15:56:37.130723   89231 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 15:56:37.130823   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:37.182829   89231 main.go:141] libmachine: Using SSH client type: native
	I0717 15:56:37.183198   89231 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 55673 <nil> <nil>}
	I0717 15:56:37.183212   89231 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 15:56:37.318189   89231 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 15:56:37.318204   89231 machine.go:91] provisioned docker machine in 1.336720873s
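
The restart command above is a change-detection guard: the freshly rendered unit is written to docker.service.new, and dockerd is reloaded and restarted only when diff finds a difference from the installed unit, so an unchanged config costs nothing. A minimal Go sketch of the same write-if-changed idea, with hypothetical paths and restart hook:

	// Sketch only: install a rendered config and restart the service only when
	// the contents actually changed, mirroring `diff -u ... || { mv; restart; }`.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func writeIfChanged(path string, rendered []byte, restart func() error) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unit unchanged: skip daemon-reload/restart entirely
		}
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return err
		}
		return restart()
	}

	func main() {
		err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"), func() error {
			// Stand-in for: systemctl daemon-reload && systemctl restart docker
			return exec.Command("true").Run()
		})
		fmt.Println(err)
	}
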
	I0717 15:56:37.318215   89231 start.go:300] post-start starting for "kubernetes-upgrade-420000" (driver="docker")
	I0717 15:56:37.318225   89231 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 15:56:37.318292   89231 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 15:56:37.318364   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:37.370371   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:56:37.462861   89231 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 15:56:37.467749   89231 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 15:56:37.467778   89231 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 15:56:37.467789   89231 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 15:56:37.467794   89231 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 15:56:37.467804   89231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 15:56:37.467902   89231 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 15:56:37.468064   89231 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 15:56:37.468230   89231 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 15:56:37.478849   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:56:37.501226   89231 start.go:303] post-start completed in 183.001462ms
	I0717 15:56:37.501308   89231 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 15:56:37.501380   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:37.553381   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:56:37.649853   89231 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 15:56:37.657171   89231 fix.go:56] fixHost completed within 1.790447595s
	I0717 15:56:37.657191   89231 start.go:83] releasing machines lock for "kubernetes-upgrade-420000", held for 1.790500066s
	I0717 15:56:37.657299   89231 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-420000
	I0717 15:56:37.721445   89231 ssh_runner.go:195] Run: cat /version.json
	I0717 15:56:37.721533   89231 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 15:56:37.721568   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:37.721709   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:37.785630   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:56:37.787414   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:56:37.877624   89231 ssh_runner.go:195] Run: systemctl --version
	I0717 15:56:37.986312   89231 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 15:56:37.992469   89231 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 15:56:37.992528   89231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 15:56:38.001723   89231 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 15:56:38.011290   89231 cni.go:311] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0717 15:56:38.011305   89231 start.go:466] detecting cgroup driver to use...
	I0717 15:56:38.011319   89231 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:56:38.011464   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:56:38.027945   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 15:56:38.038160   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 15:56:38.049396   89231 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 15:56:38.049467   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 15:56:38.061200   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:56:38.072861   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 15:56:38.083208   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 15:56:38.093272   89231 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 15:56:38.103117   89231 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 15:56:38.113178   89231 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 15:56:38.122744   89231 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 15:56:38.131323   89231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:56:38.218710   89231 ssh_runner.go:195] Run: sudo systemctl restart containerd
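[editor's note] The run of sed edits above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runc v2 shim, CNI conf dir) before this restart. A hypothetical spot-check of the result, not run in this log, would be:

    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml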
	I0717 15:56:39.197230   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:39.697955   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:40.198934   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:40.699053   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:41.197585   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:41.698147   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:42.197293   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:42.697027   89092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0717 15:56:42.768235   89092 kubeadm.go:1081] duration metric: took 11.792491636s to wait for elevateKubeSystemPrivileges.
	I0717 15:56:42.768260   89092 kubeadm.go:406] StartCluster complete in 22.351237046s
	I0717 15:56:42.768289   89092 settings.go:142] acquiring lock: {Name:mkcd1c9566f766bc2df0b9039d6e9d173f23ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:56:42.768388   89092 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:56:42.769109   89092 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:56:42.769364   89092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 15:56:42.769379   89092 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 15:56:42.769438   89092 addons.go:69] Setting storage-provisioner=true in profile "enable-default-cni-679000"
	I0717 15:56:42.769458   89092 addons.go:69] Setting default-storageclass=true in profile "enable-default-cni-679000"
	I0717 15:56:42.769465   89092 addons.go:231] Setting addon storage-provisioner=true in "enable-default-cni-679000"
	I0717 15:56:42.769474   89092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "enable-default-cni-679000"
	I0717 15:56:42.769515   89092 host.go:66] Checking if "enable-default-cni-679000" exists ...
	I0717 15:56:42.769535   89092 config.go:182] Loaded profile config "enable-default-cni-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:56:42.769790   89092 cli_runner.go:164] Run: docker container inspect enable-default-cni-679000 --format={{.State.Status}}
	I0717 15:56:42.769932   89092 cli_runner.go:164] Run: docker container inspect enable-default-cni-679000 --format={{.State.Status}}
	I0717 15:56:42.871656   89092 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:56:42.892792   89092 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 15:56:42.892817   89092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 15:56:42.892943   89092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-679000
	I0717 15:56:42.895966   89092 addons.go:231] Setting addon default-storageclass=true in "enable-default-cni-679000"
	I0717 15:56:42.896013   89092 host.go:66] Checking if "enable-default-cni-679000" exists ...
	I0717 15:56:42.896370   89092 cli_runner.go:164] Run: docker container inspect enable-default-cni-679000 --format={{.State.Status}}
	I0717 15:56:42.899219   89092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
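[editor's note] Reconstructed from the sed expressions in the command above, the block injected into the CoreDNS Corefile is:

            hosts {
               192.168.65.254 host.minikube.internal
               fallthrough
            }

which is what later lets in-cluster pods resolve host.minikube.internal to the Docker host (confirmed by the "host record injected into CoreDNS's ConfigMap" line below).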
	I0717 15:56:42.961686   89092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56156 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/enable-default-cni-679000/id_rsa Username:docker}
	I0717 15:56:42.961720   89092 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 15:56:42.961730   89092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 15:56:42.961809   89092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-679000
	I0717 15:56:43.021162   89092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56156 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/enable-default-cni-679000/id_rsa Username:docker}
	I0717 15:56:43.167506   89092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 15:56:43.354364   89092 kapi.go:248] "coredns" deployment in "kube-system" namespace and "enable-default-cni-679000" context rescaled to 1 replicas
	I0717 15:56:43.354390   89092 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 15:56:43.397916   89092 out.go:177] * Verifying Kubernetes components...
	I0717 15:56:43.419203   89092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:56:43.422256   89092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 15:56:44.274535   89092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.254 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.375258696s)
	I0717 15:56:44.274558   89092 start.go:901] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
	I0717 15:56:44.752855   89092 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.33361745s)
	I0717 15:56:44.752856   89092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.585300672s)
	I0717 15:56:44.752940   89092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.330651953s)
	I0717 15:56:44.752986   89092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" enable-default-cni-679000
	I0717 15:56:44.778885   89092 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 15:56:44.835640   89092 addons.go:502] enable addons completed in 2.066201985s: enabled=[storage-provisioner default-storageclass]
	I0717 15:56:44.847492   89092 node_ready.go:35] waiting up to 15m0s for node "enable-default-cni-679000" to be "Ready" ...
	I0717 15:56:44.852282   89092 node_ready.go:49] node "enable-default-cni-679000" has status "Ready":"True"
	I0717 15:56:44.852300   89092 node_ready.go:38] duration metric: took 4.78375ms waiting for node "enable-default-cni-679000" to be "Ready" ...
	I0717 15:56:44.852327   89092 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 15:56:44.862432   89092 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5d78c9869d-d6sjd" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.877557   89092 pod_ready.go:92] pod "coredns-5d78c9869d-d6sjd" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:45.877567   89092 pod_ready.go:81] duration metric: took 1.015112737s waiting for pod "coredns-5d78c9869d-d6sjd" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.877574   89092 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5d78c9869d-v4zjq" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.882717   89092 pod_ready.go:92] pod "coredns-5d78c9869d-v4zjq" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:45.882726   89092 pod_ready.go:81] duration metric: took 5.147634ms waiting for pod "coredns-5d78c9869d-v4zjq" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.882732   89092 pod_ready.go:78] waiting up to 15m0s for pod "etcd-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.887846   89092 pod_ready.go:92] pod "etcd-enable-default-cni-679000" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:45.887855   89092 pod_ready.go:81] duration metric: took 5.118793ms waiting for pod "etcd-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.887861   89092 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.893892   89092 pod_ready.go:92] pod "kube-apiserver-enable-default-cni-679000" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:45.893902   89092 pod_ready.go:81] duration metric: took 6.03639ms waiting for pod "kube-apiserver-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:45.893908   89092 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.054536   89092 pod_ready.go:92] pod "kube-controller-manager-enable-default-cni-679000" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:46.054547   89092 pod_ready.go:81] duration metric: took 160.633487ms waiting for pod "kube-controller-manager-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.054556   89092 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-nphd8" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.453871   89092 pod_ready.go:92] pod "kube-proxy-nphd8" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:46.453883   89092 pod_ready.go:81] duration metric: took 399.319945ms waiting for pod "kube-proxy-nphd8" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.453892   89092 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.854926   89092 pod_ready.go:92] pod "kube-scheduler-enable-default-cni-679000" in "kube-system" namespace has status "Ready":"True"
	I0717 15:56:46.854937   89092 pod_ready.go:81] duration metric: took 401.037589ms waiting for pod "kube-scheduler-enable-default-cni-679000" in "kube-system" namespace to be "Ready" ...
	I0717 15:56:46.854944   89092 pod_ready.go:38] duration metric: took 2.002565838s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 15:56:46.854966   89092 api_server.go:52] waiting for apiserver process to appear ...
	I0717 15:56:46.855026   89092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:56:46.867190   89092 api_server.go:72] duration metric: took 3.512757414s to wait for apiserver process to appear ...
	I0717 15:56:46.867201   89092 api_server.go:88] waiting for apiserver healthz status ...
	I0717 15:56:46.867232   89092 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:56160/healthz ...
	I0717 15:56:46.872630   89092 api_server.go:279] https://127.0.0.1:56160/healthz returned 200:
	ok
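[editor's note] The same health probe can be reproduced by hand against the published API server port from this run (-k because the serving cert is signed by minikube's own CA):

    curl -k https://127.0.0.1:56160/healthz    # expect: ok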
	I0717 15:56:46.874400   89092 api_server.go:141] control plane version: v1.27.3
	I0717 15:56:46.874411   89092 api_server.go:131] duration metric: took 7.206518ms to wait for apiserver health ...
	I0717 15:56:46.874419   89092 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 15:56:47.055572   89092 system_pods.go:59] 8 kube-system pods found
	I0717 15:56:47.055587   89092 system_pods.go:61] "coredns-5d78c9869d-d6sjd" [e9379c78-6d33-4fab-b65b-fa1659d6e15b] Running
	I0717 15:56:47.055592   89092 system_pods.go:61] "coredns-5d78c9869d-v4zjq" [70bf112b-6bab-4066-be2e-4faccaace1a2] Running
	I0717 15:56:47.055595   89092 system_pods.go:61] "etcd-enable-default-cni-679000" [453d01bc-3f86-469c-9132-269455b972ad] Running
	I0717 15:56:47.055599   89092 system_pods.go:61] "kube-apiserver-enable-default-cni-679000" [0685cb27-8fb6-4fbe-82e4-0af5a6c6d326] Running
	I0717 15:56:47.055603   89092 system_pods.go:61] "kube-controller-manager-enable-default-cni-679000" [1b8b1b1e-3680-400a-b2d8-956298320e6b] Running
	I0717 15:56:47.055607   89092 system_pods.go:61] "kube-proxy-nphd8" [c469a9e0-2b01-40e7-b38f-6dca4e8c6d36] Running
	I0717 15:56:47.055613   89092 system_pods.go:61] "kube-scheduler-enable-default-cni-679000" [d9687153-9ada-405e-a5c9-27bce46d6b7d] Running
	I0717 15:56:47.055618   89092 system_pods.go:61] "storage-provisioner" [7c668b09-f2e3-474d-b078-d01006900bf3] Running
	I0717 15:56:47.055622   89092 system_pods.go:74] duration metric: took 181.198351ms to wait for pod list to return data ...
	I0717 15:56:47.055628   89092 default_sa.go:34] waiting for default service account to be created ...
	I0717 15:56:47.253764   89092 default_sa.go:45] found service account: "default"
	I0717 15:56:47.253777   89092 default_sa.go:55] duration metric: took 198.143476ms for default service account to be created ...
	I0717 15:56:47.253783   89092 system_pods.go:116] waiting for k8s-apps to be running ...
	I0717 15:56:47.456924   89092 system_pods.go:86] 8 kube-system pods found
	I0717 15:56:47.456940   89092 system_pods.go:89] "coredns-5d78c9869d-d6sjd" [e9379c78-6d33-4fab-b65b-fa1659d6e15b] Running
	I0717 15:56:47.456945   89092 system_pods.go:89] "coredns-5d78c9869d-v4zjq" [70bf112b-6bab-4066-be2e-4faccaace1a2] Running
	I0717 15:56:47.456948   89092 system_pods.go:89] "etcd-enable-default-cni-679000" [453d01bc-3f86-469c-9132-269455b972ad] Running
	I0717 15:56:47.456952   89092 system_pods.go:89] "kube-apiserver-enable-default-cni-679000" [0685cb27-8fb6-4fbe-82e4-0af5a6c6d326] Running
	I0717 15:56:47.456956   89092 system_pods.go:89] "kube-controller-manager-enable-default-cni-679000" [1b8b1b1e-3680-400a-b2d8-956298320e6b] Running
	I0717 15:56:47.456961   89092 system_pods.go:89] "kube-proxy-nphd8" [c469a9e0-2b01-40e7-b38f-6dca4e8c6d36] Running
	I0717 15:56:47.456978   89092 system_pods.go:89] "kube-scheduler-enable-default-cni-679000" [d9687153-9ada-405e-a5c9-27bce46d6b7d] Running
	I0717 15:56:47.456986   89092 system_pods.go:89] "storage-provisioner" [7c668b09-f2e3-474d-b078-d01006900bf3] Running
	I0717 15:56:47.456998   89092 system_pods.go:126] duration metric: took 203.207892ms to wait for k8s-apps to be running ...
	I0717 15:56:47.457004   89092 system_svc.go:44] waiting for kubelet service to be running ....
	I0717 15:56:47.457062   89092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:56:47.468452   89092 system_svc.go:56] duration metric: took 11.443468ms WaitForService to wait for kubelet.
	I0717 15:56:47.468466   89092 kubeadm.go:581] duration metric: took 4.114032323s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0717 15:56:47.468483   89092 node_conditions.go:102] verifying NodePressure condition ...
	I0717 15:56:47.653579   89092 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 15:56:47.653594   89092 node_conditions.go:123] node cpu capacity is 6
	I0717 15:56:47.653605   89092 node_conditions.go:105] duration metric: took 185.114338ms to run NodePressure ...
	I0717 15:56:47.653636   89092 start.go:228] waiting for startup goroutines ...
	I0717 15:56:47.653658   89092 start.go:233] waiting for cluster config update ...
	I0717 15:56:47.653678   89092 start.go:242] writing updated cluster config ...
	I0717 15:56:47.654003   89092 ssh_runner.go:195] Run: rm -f paused
	I0717 15:56:47.692418   89092 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 15:56:47.714578   89092 out.go:177] * Done! kubectl is now configured to use "enable-default-cni-679000" cluster and "default" namespace by default
	I0717 15:56:48.394699   89231 ssh_runner.go:235] Completed: sudo systemctl restart containerd: (10.175891902s)
	I0717 15:56:48.394726   89231 start.go:466] detecting cgroup driver to use...
	I0717 15:56:48.394746   89231 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 15:56:48.395038   89231 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 15:56:48.417246   89231 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 15:56:48.417333   89231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 15:56:48.431175   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 15:56:48.452740   89231 ssh_runner.go:195] Run: which cri-dockerd
	I0717 15:56:48.458828   89231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 15:56:48.470902   89231 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 15:56:48.496539   89231 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 15:56:48.596205   89231 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 15:56:48.688448   89231 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 15:56:48.688464   89231 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 15:56:48.706137   89231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:56:48.790765   89231 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 15:56:49.084542   89231 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 15:56:49.160813   89231 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 15:56:49.245874   89231 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 15:56:49.323353   89231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:56:49.403823   89231 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 15:56:49.470424   89231 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 15:56:49.587675   89231 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 15:56:49.686572   89231 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 15:56:49.686693   89231 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 15:56:49.692200   89231 start.go:534] Will wait 60s for crictl version
	I0717 15:56:49.692257   89231 ssh_runner.go:195] Run: which crictl
	I0717 15:56:49.696701   89231 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 15:56:49.758186   89231 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 15:56:49.758262   89231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 15:56:49.783598   89231 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 15:56:49.835957   89231 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 15:56:49.836101   89231 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-420000 dig +short host.docker.internal
	I0717 15:56:49.956112   89231 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 15:56:49.956238   89231 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
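[editor's note] The grep above checks whether the guest's /etc/hosts already carries the host record; per the grep pattern, the entry minikube maintains is a tab-separated line of the form:

    192.168.65.254	host.minikube.internal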
	I0717 15:56:49.961395   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:50.012958   89231 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 15:56:50.013044   89231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:56:50.034011   89231 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 15:56:50.034028   89231 docker.go:566] Images already preloaded, skipping extraction
	I0717 15:56:50.034095   89231 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 15:56:50.055385   89231 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 15:56:50.055403   89231 cache_images.go:84] Images are preloaded, skipping loading
	I0717 15:56:50.055488   89231 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 15:56:50.110436   89231 cni.go:84] Creating CNI manager for ""
	I0717 15:56:50.110451   89231 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 15:56:50.110473   89231 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 15:56:50.110491   89231 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-420000 NodeName:kubernetes-upgrade-420000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 15:56:50.110637   89231 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-420000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 15:56:50.110723   89231 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-420000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
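[editor's note] The empty `ExecStart=` line in the drop-in above is the standard systemd idiom for clearing the ExecStart inherited from the base kubelet unit before substituting minikube's own command line. The effective composition can be inspected the same way this log inspects docker.service earlier:

    sudo systemctl cat kubelet
    # shows /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in scp'd below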
	I0717 15:56:50.110798   89231 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 15:56:50.120529   89231 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 15:56:50.120585   89231 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 15:56:50.129941   89231 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (386 bytes)
	I0717 15:56:50.146402   89231 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 15:56:50.163515   89231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2108 bytes)
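[editor's note] The rendered kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (the scp just above); the restart path later decides whether a reconfigure is needed by diffing it against the previous copy, as seen further down in this log:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # exit 0 (no output) means the existing cluster config already matches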
	I0717 15:56:50.181134   89231 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 15:56:50.204802   89231 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000 for IP: 192.168.76.2
	I0717 15:56:50.204830   89231 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:56:50.205002   89231 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 15:56:50.205070   89231 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 15:56:50.205173   89231 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key
	I0717 15:56:50.205256   89231 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key.31bdca25
	I0717 15:56:50.205319   89231 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key
	I0717 15:56:50.205548   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 15:56:50.205590   89231 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 15:56:50.205606   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 15:56:50.205646   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 15:56:50.205682   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 15:56:50.205723   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 15:56:50.205804   89231 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 15:56:50.206364   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 15:56:50.228095   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 15:56:50.251622   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 15:56:50.273733   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 15:56:50.297435   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 15:56:50.320027   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 15:56:50.342634   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 15:56:50.364495   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 15:56:50.387772   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 15:56:50.413467   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 15:56:50.435885   89231 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 15:56:50.466050   89231 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 15:56:50.488643   89231 ssh_runner.go:195] Run: openssl version
	I0717 15:56:50.497663   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 15:56:50.511805   89231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:56:50.518536   89231 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:56:50.518663   89231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 15:56:50.549191   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 15:56:50.568177   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 15:56:50.585086   89231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 15:56:50.592147   89231 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 15:56:50.592227   89231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 15:56:50.649765   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 15:56:50.664825   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 15:56:50.683373   89231 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 15:56:50.749739   89231 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 15:56:50.749827   89231 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 15:56:50.759318   89231 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
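[editor's note] Each test -s / ln -fs pair above materializes a CA in the OpenSSL trust directory: the link name is the certificate's subject hash (printed by the preceding `openssl x509 -hash` call) plus a ".0" suffix, which is how OpenSSL looks up trust anchors in /etc/ssl/certs. For the minikube CA in this run:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0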
	I0717 15:56:50.773520   89231 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 15:56:50.780277   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 15:56:50.791254   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 15:56:50.853012   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 15:56:50.863783   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 15:56:50.875045   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 15:56:50.885930   89231 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
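[editor's note] These `-checkend 86400` probes exit 0 only if the certificate will still be valid 86400 seconds (24 h) from now, so a failing check would flag the cert for regeneration; by hand:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 \
      && echo 'valid for at least 24h' || echo 'expires within 24h'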
	I0717 15:56:50.897489   89231 kubeadm.go:404] StartCluster: {Name:kubernetes-upgrade-420000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-420000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:56:50.897753   89231 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:56:50.966954   89231 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 15:56:50.983049   89231 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 15:56:50.983068   89231 kubeadm.go:636] restartCluster start
	I0717 15:56:50.983143   89231 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 15:56:51.050493   89231 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:56:51.050578   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:56:51.114627   89231 kubeconfig.go:92] found "kubernetes-upgrade-420000" server: "https://127.0.0.1:55677"
	I0717 15:56:51.115426   89231 kapi.go:59] client config for kubernetes-upgrade-420000: &rest.Config{Host:"https://127.0.0.1:55677", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key", CAFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 15:56:51.116197   89231 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 15:56:51.156675   89231 api_server.go:166] Checking apiserver status ...
	I0717 15:56:51.156794   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 15:56:51.172900   89231 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:56:51.674422   89231 api_server.go:166] Checking apiserver status ...
	I0717 15:56:51.674564   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:56:51.694836   89231 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14005/cgroup
	W0717 15:56:51.749805   89231 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14005/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:56:51.749895   89231 ssh_runner.go:195] Run: ls
	I0717 15:56:51.756322   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:56:53.804568   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 15:56:53.804605   89231 retry.go:31] will retry after 301.921759ms: https://127.0.0.1:55677/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
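[editor's note] The 403 for system:anonymous is expected this early in a restart: the unauthenticated probe is plausibly only authorized once the rbac/bootstrap-roles post-start hook (still reported failing in the 500 bodies below) has published the default role bindings that grant /healthz to anonymous callers. Reproducing the probe without a client certificate:

    curl -k https://127.0.0.1:55677/healthz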
	I0717 15:56:54.107946   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:56:54.114980   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:54.115000   89231 retry.go:31] will retry after 268.309938ms: https://127.0.0.1:55677/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:54.383392   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:56:54.388575   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:54.388592   89231 retry.go:31] will retry after 345.552692ms: https://127.0.0.1:55677/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:54.735579   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:56:54.743008   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:54.743027   89231 retry.go:31] will retry after 530.478749ms: https://127.0.0.1:55677/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:56:55.274466   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:56:55.280542   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 200:
	ok
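Note on the exchange above: minikube's api_server.go wait is a plain HTTPS poll against the forwarded apiserver port, retrying whenever /healthz returns anything but 200. A minimal standalone sketch of the same loop, assuming the forwarded port 55677 from this log and deliberately skipping TLS verification because the apiserver certificate is signed by minikube's private CA (acceptable for a loopback probe, never for production):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Assumption: 55677 is the host port Docker forwards to the
        // apiserver, as in the log above; adjust for your own cluster.
        url := "https://127.0.0.1:55677/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip verification: the cert chains to minikube's private CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // A 500 body enumerates each check as "[+]name ok" or
                // "[-]name failed", exactly as captured above.
                fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("healthz never became ready")
    }

The 500 bodies above are this endpoint enumerating its checks; the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that finish moments later, which is why the very next poll returns 200.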
	I0717 15:56:55.292677   89231 system_pods.go:86] 5 kube-system pods found
	I0717 15:56:55.292695   89231 system_pods.go:89] "etcd-kubernetes-upgrade-420000" [94fa6439-534c-4b3f-bf5d-0331bd326cb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 15:56:55.292703   89231 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-420000" [c88b962c-5b12-4485-818c-3763472b18e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 15:56:55.292714   89231 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-420000" [c2776920-6748-489e-85bb-8648a397549b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 15:56:55.292724   89231 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-420000" [234256f9-9af5-4dc0-a474-19ed811eb833] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 15:56:55.292728   89231 system_pods.go:89] "storage-provisioner" [b502bf26-c5d7-4c69-a3a4-1f4a8e380cea] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 15:56:55.292736   89231 kubeadm.go:620] needs reconfigure: missing components: kube-dns, kube-proxy
	I0717 15:56:55.292743   89231 kubeadm.go:1128] stopping kube-system containers ...
	I0717 15:56:55.292812   89231 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 15:56:55.314014   89231 docker.go:462] Stopping containers: [86137de89163 96a6f5f2a73e 04ce2f4205d8 7ff7e0d5f99c 22f72919c6dc fd8d8c5ec91c 2ea372cab48d bd3fb6071116 ed04302551e8 74b779ef850a 3c3e45082577 7f0a300009a3 56bdf87f3ded 355954d3173c 3fe0505b2c09 4c0372a5c9dd]
	I0717 15:56:55.314095   89231 ssh_runner.go:195] Run: docker stop 86137de89163 96a6f5f2a73e 04ce2f4205d8 7ff7e0d5f99c 22f72919c6dc fd8d8c5ec91c 2ea372cab48d bd3fb6071116 ed04302551e8 74b779ef850a 3c3e45082577 7f0a300009a3 56bdf87f3ded 355954d3173c 3fe0505b2c09 4c0372a5c9dd
	I0717 15:56:55.749924   89231 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 15:56:55.850103   89231 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 15:56:55.864426   89231 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 22:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Jul 17 22:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Jul 17 22:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 22:56 /etc/kubernetes/scheduler.conf
	
	I0717 15:56:55.864516   89231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 15:56:55.879116   89231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 15:56:55.946798   89231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 15:56:55.959637   89231 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:56:55.959705   89231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 15:56:55.969248   89231 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 15:56:55.980035   89231 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:56:55.980116   89231 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
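The grep/rm sequence above is minikube validating that each existing kubeconfig still points at the expected control-plane endpoint; any file that does not mention it is deleted so the kubeadm phases that follow regenerate it. A rough Go equivalent of that check, with the paths and endpoint taken from this log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // missing file: nothing to clean up
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s lacks %s, removing\n", f, endpoint)
                _ = os.Remove(f)
            }
        }
    }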
	I0717 15:56:55.997410   89231 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 15:56:56.006775   89231 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 15:56:56.006785   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 15:56:56.065924   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 15:56:56.579226   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 15:56:56.719038   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 15:56:56.772417   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
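Rather than running a full `kubeadm init`, the restart path above replays individual init phases against the regenerated config. A sketch of that sequence, assuming the versioned kubeadm binary path from this log and sufficient privileges (the real invocations are wrapped in `sudo env PATH=...` over SSH):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.27.3/kubeadm"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command(kubeadm, args...).CombinedOutput()
            fmt.Printf("kubeadm %v: err=%v\n%s\n", p, err, out)
        }
    }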
	I0717 15:56:56.863261   89231 api_server.go:52] waiting for apiserver process to appear ...
	I0717 15:56:56.863356   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:56:57.377829   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:56:57.878877   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:56:57.892818   89231 api_server.go:72] duration metric: took 1.029540917s to wait for apiserver process to appear ...
	I0717 15:56:57.892839   89231 api_server.go:88] waiting for apiserver healthz status ...
	I0717 15:56:57.892851   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:57:00.086501   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 15:57:00.086521   89231 api_server.go:103] status: https://127.0.0.1:55677/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 15:57:00.586630   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:57:00.591517   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 15:57:00.591540   89231 api_server.go:103] status: https://127.0.0.1:55677/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:57:01.086700   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:57:01.092886   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 15:57:01.092907   89231 api_server.go:103] status: https://127.0.0.1:55677/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 15:57:01.587410   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:57:01.594838   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 200:
	ok
	I0717 15:57:01.603041   89231 api_server.go:141] control plane version: v1.27.3
	I0717 15:57:01.603062   89231 api_server.go:131] duration metric: took 3.710190073s to wait for apiserver health ...
	I0717 15:57:01.603068   89231 cni.go:84] Creating CNI manager for ""
	I0717 15:57:01.603075   89231 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 15:57:01.624690   89231 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 15:57:01.647742   89231 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 15:57:01.654928   89231 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 15:57:01.654939   89231 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 15:57:01.671398   89231 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
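With the docker driver and docker runtime, cni.go settles on kindnet and applies its manifest with the version-matched kubectl, as shown above. A sketch of that apply step, assuming it runs on the minikube node itself with the paths from this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // The manifest was already copied to the node as cni.yaml; apply
        // it with the kubectl binary matching the cluster version.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.27.3/kubectl",
            "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml",
        )
        out, err := cmd.CombinedOutput()
        fmt.Printf("err=%v\n%s\n", err, out)
    }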
	I0717 15:57:02.371327   89231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 15:57:02.378057   89231 system_pods.go:59] 5 kube-system pods found
	I0717 15:57:02.378071   89231 system_pods.go:61] "etcd-kubernetes-upgrade-420000" [94fa6439-534c-4b3f-bf5d-0331bd326cb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 15:57:02.378078   89231 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-420000" [c88b962c-5b12-4485-818c-3763472b18e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 15:57:02.378089   89231 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-420000" [c2776920-6748-489e-85bb-8648a397549b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 15:57:02.378097   89231 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-420000" [234256f9-9af5-4dc0-a474-19ed811eb833] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 15:57:02.378110   89231 system_pods.go:61] "storage-provisioner" [b502bf26-c5d7-4c69-a3a4-1f4a8e380cea] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 15:57:02.378119   89231 system_pods.go:74] duration metric: took 6.780762ms to wait for pod list to return data ...
	I0717 15:57:02.378128   89231 node_conditions.go:102] verifying NodePressure condition ...
	I0717 15:57:02.381475   89231 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 15:57:02.381489   89231 node_conditions.go:123] node cpu capacity is 6
	I0717 15:57:02.381500   89231 node_conditions.go:105] duration metric: took 3.367538ms to run NodePressure ...
	I0717 15:57:02.381512   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 15:57:02.517384   89231 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 15:57:02.526071   89231 ops.go:34] apiserver oom_adj: -16
	I0717 15:57:02.526082   89231 kubeadm.go:640] restartCluster took 11.542937576s
	I0717 15:57:02.526087   89231 kubeadm.go:406] StartCluster complete in 11.628541877s
	I0717 15:57:02.526097   89231 settings.go:142] acquiring lock: {Name:mkcd1c9566f766bc2df0b9039d6e9d173f23ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:57:02.526200   89231 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:57:02.526961   89231 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:57:02.527201   89231 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 15:57:02.527235   89231 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 15:57:02.527302   89231 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-420000"
	I0717 15:57:02.527301   89231 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-420000"
	I0717 15:57:02.527320   89231 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-420000"
	I0717 15:57:02.527333   89231 addons.go:231] Setting addon storage-provisioner=true in "kubernetes-upgrade-420000"
	W0717 15:57:02.527339   89231 addons.go:240] addon storage-provisioner should already be in state true
	I0717 15:57:02.527374   89231 host.go:66] Checking if "kubernetes-upgrade-420000" exists ...
	I0717 15:57:02.527376   89231 config.go:182] Loaded profile config "kubernetes-upgrade-420000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:57:02.527623   89231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:57:02.527684   89231 kapi.go:59] client config for kubernetes-upgrade-420000: &rest.Config{Host:"https://127.0.0.1:55677", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key", CAFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 15:57:02.527768   89231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:57:02.534423   89231 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kubernetes-upgrade-420000" context rescaled to 1 replicas
	I0717 15:57:02.534484   89231 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
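The 6m0s figure above is a hard deadline for the node checks that follow (apiserver process, healthz, system pods). A generic sketch of such a wait, using a context deadline around a pluggable check; the check here is a stand-in, not minikube's real predicate:

    package main

    import (
        "context"
        "fmt"
        "time"
    )

    // waitForNode polls check until it succeeds or the context deadline
    // expires.
    func waitForNode(ctx context.Context, check func() bool) error {
        ticker := time.NewTicker(2 * time.Second)
        defer ticker.Stop()
        for {
            if check() {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("node not ready: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        // Toy check that flips to true after a few seconds, standing in
        // for "apiserver up, healthz 200, system pods present".
        readyAt := time.Now().Add(5 * time.Second)
        if err := waitForNode(ctx, func() bool { return time.Now().After(readyAt) }); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("node ready")
    }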
	I0717 15:57:02.556980   89231 out.go:177] * Verifying Kubernetes components...
	I0717 15:57:02.615116   89231 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:57:02.623907   89231 kapi.go:59] client config for kubernetes-upgrade-420000: &rest.Config{Host:"https://127.0.0.1:55677", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubernetes-upgrade-420000/client.key", CAFile:"/Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2586c40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0717 15:57:02.643945   89231 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 15:57:02.625528   89231 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 15:57:02.630526   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:57:02.651762   89231 addons.go:231] Setting addon default-storageclass=true in "kubernetes-upgrade-420000"
	W0717 15:57:02.665108   89231 addons.go:240] addon default-storageclass should already be in state true
	I0717 15:57:02.665142   89231 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 15:57:02.665152   89231 host.go:66] Checking if "kubernetes-upgrade-420000" exists ...
	I0717 15:57:02.665162   89231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 15:57:02.665285   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:57:02.668623   89231 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-420000 --format={{.State.Status}}
	I0717 15:57:02.729500   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:57:02.729735   89231 api_server.go:52] waiting for apiserver process to appear ...
	I0717 15:57:02.729800   89231 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 15:57:02.729817   89231 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 15:57:02.729829   89231 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:57:02.729907   89231 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-420000
	I0717 15:57:02.743699   89231 api_server.go:72] duration metric: took 209.18131ms to wait for apiserver process to appear ...
	I0717 15:57:02.743722   89231 api_server.go:88] waiting for apiserver healthz status ...
	I0717 15:57:02.743736   89231 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:55677/healthz ...
	I0717 15:57:02.750770   89231 api_server.go:279] https://127.0.0.1:55677/healthz returned 200:
	ok
	I0717 15:57:02.752660   89231 api_server.go:141] control plane version: v1.27.3
	I0717 15:57:02.752675   89231 api_server.go:131] duration metric: took 8.946611ms to wait for apiserver health ...
	I0717 15:57:02.752683   89231 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 15:57:02.757469   89231 system_pods.go:59] 5 kube-system pods found
	I0717 15:57:02.757488   89231 system_pods.go:61] "etcd-kubernetes-upgrade-420000" [94fa6439-534c-4b3f-bf5d-0331bd326cb0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 15:57:02.757501   89231 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-420000" [c88b962c-5b12-4485-818c-3763472b18e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 15:57:02.757512   89231 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-420000" [c2776920-6748-489e-85bb-8648a397549b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 15:57:02.757519   89231 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-420000" [234256f9-9af5-4dc0-a474-19ed811eb833] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 15:57:02.757524   89231 system_pods.go:61] "storage-provisioner" [b502bf26-c5d7-4c69-a3a4-1f4a8e380cea] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0717 15:57:02.757532   89231 system_pods.go:74] duration metric: took 4.843574ms to wait for pod list to return data ...
	I0717 15:57:02.757539   89231 kubeadm.go:581] duration metric: took 223.029438ms to wait for : map[apiserver:true system_pods:true] ...
	I0717 15:57:02.757550   89231 node_conditions.go:102] verifying NodePressure condition ...
	I0717 15:57:02.761162   89231 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 15:57:02.761174   89231 node_conditions.go:123] node cpu capacity is 6
	I0717 15:57:02.761189   89231 node_conditions.go:105] duration metric: took 3.628011ms to run NodePressure ...
	I0717 15:57:02.761198   89231 start.go:228] waiting for startup goroutines ...
	I0717 15:57:02.785541   89231 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:55673 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/kubernetes-upgrade-420000/id_rsa Username:docker}
	I0717 15:57:02.832706   89231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 15:57:02.887908   89231 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 15:57:03.490006   89231 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0717 15:57:03.530955   89231 addons.go:502] enable addons completed in 1.003709377s: enabled=[storage-provisioner default-storageclass]
	I0717 15:57:03.531067   89231 start.go:233] waiting for cluster config update ...
	I0717 15:57:03.531092   89231 start.go:242] writing updated cluster config ...
	I0717 15:57:03.531714   89231 ssh_runner.go:195] Run: rm -f paused
	I0717 15:57:03.573409   89231 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 15:57:03.594655   89231 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-420000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jul 17 22:56:49 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Jul 17 22:56:49 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Jul 17 22:56:49 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Jul 17 22:56:49 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:49Z" level=info msg="Start cri-dockerd grpc backend"
	Jul 17 22:56:49 kubernetes-upgrade-420000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	Jul 17 22:56:50 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2ea372cab48dac48d5dd0c99025852a6d00cfd926a0758d420e3afe99f9c3f7a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:50 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd8d8c5ec91c0d705d71ef0e0198d75ee31778254d8862ae246ade65291b6f03/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:50 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/22f72919c6dc9b05637c25a8d3f3f16c9bf1323d03a133c357c62f3482a42cc5/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.379021997Z" level=info msg="ignoring event" container=fd8d8c5ec91c0d705d71ef0e0198d75ee31778254d8862ae246ade65291b6f03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.390321650Z" level=info msg="ignoring event" container=7ff7e0d5f99c92517c2d587b9c0cb7cc3a3d6b87c5693b9603133b7f2277536d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.390377940Z" level=info msg="ignoring event" container=bd3fb6071116916649826367f6aff5282991dfb10ccbadaa2fa509ac734b444b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.390395520Z" level=info msg="ignoring event" container=22f72919c6dc9b05637c25a8d3f3f16c9bf1323d03a133c357c62f3482a42cc5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.451279580Z" level=info msg="ignoring event" container=2ea372cab48dac48d5dd0c99025852a6d00cfd926a0758d420e3afe99f9c3f7a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.453020165Z" level=info msg="ignoring event" container=04ce2f4205d82c3e4fb8e61e99850a58f03ee1c5f62dd71dc820327fa52f95d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.461939979Z" level=info msg="ignoring event" container=96a6f5f2a73ea393b9b1584e56c71af347b3d1a0e02d8c81d469980cce2cbfe1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 dockerd[12920]: time="2023-07-17T22:56:55.674319442Z" level=info msg="ignoring event" container=86137de891636ad0e0b21e97c31848b7027e2767b6dc384c23888fa433e9c128 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 22:56:55 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/191eb6468b8d324c4989ca129e296a0e4903ef76cd7388e9f6fdd19b4e79376c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:55 kubernetes-upgrade-420000 cri-dockerd[13275]: W0717 22:56:55.863237   13275 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 22:56:55 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b7f8166478870d394c9e2c91138475591f95e1f6e6eee12c2eae50c39c27283a/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:55 kubernetes-upgrade-420000 cri-dockerd[13275]: W0717 22:56:55.889307   13275 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 22:56:56 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8662a7895349cd25a3ebedee0eae5a8f3acdecf89aa2e259644c40f6dd9f0ebc/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:56 kubernetes-upgrade-420000 cri-dockerd[13275]: W0717 22:56:56.037692   13275 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 22:56:56 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:56Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3bbb2281594d5448f1b1415c37c415193bdb13b0394ea787a25722f12a788994/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 22:56:56 kubernetes-upgrade-420000 cri-dockerd[13275]: W0717 22:56:56.037997   13275 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 22:56:56 kubernetes-upgrade-420000 cri-dockerd[13275]: time="2023-07-17T22:56:56Z" level=error msg="Failed to retrieve checkpoint for sandbox 3fe0505b2c0989b872c84f21b2635981f6bce3ffc4c0f841ad77ec3a2f7af125: checkpoint is not found"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c61f75f91f530       41697ceeb70b3       8 seconds ago       Running             kube-scheduler            2                   191eb6468b8d3       kube-scheduler-kubernetes-upgrade-420000
	86960f7b0bd62       7cffc01dba0e1       8 seconds ago       Running             kube-controller-manager   2                   b7f8166478870       kube-controller-manager-kubernetes-upgrade-420000
	24e819ae3a490       08a0c939e61b7       8 seconds ago       Running             kube-apiserver            2                   3bbb2281594d5       kube-apiserver-kubernetes-upgrade-420000
	825d19925827a       86b6af7dd652c       8 seconds ago       Running             etcd                      2                   8662a7895349c       etcd-kubernetes-upgrade-420000
	86137de891636       08a0c939e61b7       15 seconds ago      Exited              kube-apiserver            1                   22f72919c6dc9       kube-apiserver-kubernetes-upgrade-420000
	96a6f5f2a73ea       7cffc01dba0e1       15 seconds ago      Exited              kube-controller-manager   1                   fd8d8c5ec91c0       kube-controller-manager-kubernetes-upgrade-420000
	04ce2f4205d82       86b6af7dd652c       15 seconds ago      Exited              etcd                      1                   2ea372cab48da       etcd-kubernetes-upgrade-420000
	7ff7e0d5f99c9       41697ceeb70b3       15 seconds ago      Exited              kube-scheduler            1                   bd3fb60711169       kube-scheduler-kubernetes-upgrade-420000
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-420000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-420000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=kubernetes-upgrade-420000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T15_56_33_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 22:56:29 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-420000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 22:57:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 22:57:00 +0000   Mon, 17 Jul 2023 22:56:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 22:57:00 +0000   Mon, 17 Jul 2023 22:56:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 22:57:00 +0000   Mon, 17 Jul 2023 22:56:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 17 Jul 2023 22:57:00 +0000   Mon, 17 Jul 2023 22:56:27 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-420000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d37b7dc876964f488dea2177b20f01c9
	  System UUID:                d37b7dc876964f488dea2177b20f01c9
	  Boot ID:                    39ad526a-f9da-4327-9b2d-183cb5a85afa
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-420000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kube-apiserver-kubernetes-upgrade-420000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-420000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-kubernetes-upgrade-420000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 33s              kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  33s              kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  33s              kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s              kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s              kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x9 over 9s)  kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)  kubelet  Node kubernetes-upgrade-420000 status is now: NodeHasSufficientPID
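The describe output above shows why storage-provisioner was Unschedulable earlier in the log: at capture time the node still carried the node.kubernetes.io/not-ready:NoSchedule taint because the CNI config was uninitialized. A client-go sketch for inspecting those taints, assuming k8s.io/client-go is available and using the kubeconfig path and node name from this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "kubernetes-upgrade-420000", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Each taint printed here blocks scheduling of pods that do not
        // tolerate it, e.g. node.kubernetes.io/not-ready:NoSchedule.
        for _, t := range node.Spec.Taints {
            fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
        }
    }

Once the kindnet manifest applied earlier takes effect, the kubelet reports NetworkReady=true, the taint is lifted, and pending pods schedule normally.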
	
	* 
	* ==> dmesg <==
	* [  +0.000172] FS-Cache: O-key=[8] 'cd0a250700000000'
	[  +0.000043] FS-Cache: N-cookie c=0000003d [p=00000035 fl=2 nc=0 na=1]
	[  +0.000067] FS-Cache: N-cookie d=000000008be41ffa{9p.inode} n=000000001e091c4e
	[  +0.000098] FS-Cache: N-key=[8] 'cd0a250700000000'
	[  +0.001512] FS-Cache: Duplicate cookie detected
	[  +0.000033] FS-Cache: O-cookie c=00000037 [p=00000035 fl=226 nc=0 na=1]
	[  +0.000053] FS-Cache: O-cookie d=000000008be41ffa{9p.inode} n=00000000a44ec40c
	[  +0.000051] FS-Cache: O-key=[8] 'cd0a250700000000'
	[  +0.000131] FS-Cache: N-cookie c=0000003e [p=00000035 fl=2 nc=0 na=1]
	[  +0.000067] FS-Cache: N-cookie d=000000008be41ffa{9p.inode} n=00000000755caf55
	[  +0.000095] FS-Cache: N-key=[8] 'cd0a250700000000'
	[  +2.244597] FS-Cache: Duplicate cookie detected
	[  +0.000054] FS-Cache: O-cookie c=00000038 [p=00000035 fl=226 nc=0 na=1]
	[  +0.000147] FS-Cache: O-cookie d=000000008be41ffa{9p.inode} n=00000000234ad572
	[  +0.000067] FS-Cache: O-key=[8] 'cc0a250700000000'
	[  +0.000086] FS-Cache: N-cookie c=00000041 [p=00000035 fl=2 nc=0 na=1]
	[  +0.000104] FS-Cache: N-cookie d=000000008be41ffa{9p.inode} n=00000000d8747407
	[  +0.000091] FS-Cache: N-key=[8] 'cc0a250700000000'
	[  +0.503259] FS-Cache: Duplicate cookie detected
	[  +0.000061] FS-Cache: O-cookie c=0000003b [p=00000035 fl=226 nc=0 na=1]
	[  +0.000042] FS-Cache: O-cookie d=000000008be41ffa{9p.inode} n=000000006f90313b
	[  +0.000035] FS-Cache: O-key=[8] 'e90a250700000000'
	[  +0.000132] FS-Cache: N-cookie c=00000042 [p=00000035 fl=2 nc=0 na=1]
	[  +0.000044] FS-Cache: N-cookie d=000000008be41ffa{9p.inode} n=0000000058a88934
	[  +0.000196] FS-Cache: N-key=[8] 'e90a250700000000'
	
	* 
	* ==> etcd [04ce2f4205d8] <==
	* {"level":"info","ts":"2023-07-17T22:56:51.177Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:56:51.177Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:56:51.177Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:56:51.177Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:51.177Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:52.766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:52.767Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:52.768Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:52.768Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-420000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:56:52.768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:52.768Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:52.769Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-17T22:56:52.769Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T22:56:55.348Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T22:56:55.348Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"kubernetes-upgrade-420000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"info","ts":"2023-07-17T22:56:55.363Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2023-07-17T22:56:55.365Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:55.366Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:55.366Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"kubernetes-upgrade-420000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	* 
	* ==> etcd [825d19925827] <==
	* {"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:57.474Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T22:56:57.476Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T22:56:57.476Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T22:56:57.476Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T22:56:57.476Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:57.476Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 4"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 4"}
	{"level":"info","ts":"2023-07-17T22:56:59.167Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 4"}
	{"level":"info","ts":"2023-07-17T22:56:59.169Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-420000 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T22:56:59.169Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:59.169Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T22:56:59.169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:59.169Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T22:56:59.170Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2023-07-17T22:56:59.170Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:57:05 up  6:56,  0 users,  load average: 1.65, 1.64, 1.44
	Linux kubernetes-upgrade-420000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kube-apiserver [24e819ae3a49] <==
	* I0717 22:57:00.077756       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0717 22:57:00.082799       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0717 22:57:00.082846       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0717 22:57:00.082884       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0717 22:57:00.083214       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 22:57:00.165948       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0717 22:57:00.166678       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0717 22:57:00.166741       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0717 22:57:00.166748       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0717 22:57:00.172345       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0717 22:57:00.173763       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0717 22:57:00.173818       1 aggregator.go:152] initial CRD sync complete...
	I0717 22:57:00.173823       1 autoregister_controller.go:141] Starting autoregister controller
	I0717 22:57:00.173827       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0717 22:57:00.173832       1 cache.go:39] Caches are synced for autoregister controller
	I0717 22:57:00.177978       1 shared_informer.go:318] Caches are synced for configmaps
	I0717 22:57:00.183076       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0717 22:57:00.245048       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0717 22:57:00.875797       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0717 22:57:01.074281       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0717 22:57:02.166082       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 22:57:02.363403       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 22:57:02.461645       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 22:57:02.503635       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 22:57:02.509105       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [86137de89163] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 22:56:55.361360       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 22:56:55.361392       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 22:56:55.365865       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
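
These repeated refused dials to 127.0.0.1:2379 line up with the etcd timeline above: this is the outgoing apiserver instance (86137de89163) retrying its etcd channels while etcd was down between restarts, whereas the replacement instance (24e819ae3a49) syncs its caches cleanly once etcd is serving again at 22:56:59. Once an apiserver is up, its view of the etcd backend can be probed directly; the readyz paths below are standard check names in this Kubernetes version, but verify before relying on them:

    kubectl get --raw='/readyz/etcd'
    kubectl get --raw='/readyz?verbose'

The first returns ok when the etcd check passes; the second enumerates every readiness check individually.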
	
	* 
	* ==> kube-controller-manager [86960f7b0bd6] <==
	* I0717 22:57:02.184373       1 controller.go:169] "Starting ephemeral volume controller"
	I0717 22:57:02.184390       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0717 22:57:02.191028       1 controllermanager.go:638] "Started controller" controller="podgc"
	I0717 22:57:02.191061       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0717 22:57:02.191066       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	I0717 22:57:02.191477       1 gc_controller.go:103] Starting GC controller
	I0717 22:57:02.191513       1 shared_informer.go:311] Waiting for caches to sync for GC
	E0717 22:57:02.193521       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0717 22:57:02.193618       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0717 22:57:02.245185       1 shared_informer.go:318] Caches are synced for tokens
	I0717 22:57:02.249893       1 controllermanager.go:638] "Started controller" controller="persistentvolume-binder"
	I0717 22:57:02.249998       1 pv_controller_base.go:323] "Starting persistent volume controller"
	I0717 22:57:02.250006       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0717 22:57:02.252048       1 controllermanager.go:638] "Started controller" controller="pv-protection"
	I0717 22:57:02.252144       1 pv_protection_controller.go:78] "Starting PV protection controller"
	I0717 22:57:02.252198       1 shared_informer.go:311] Waiting for caches to sync for PV protection
	I0717 22:57:02.259026       1 controllermanager.go:638] "Started controller" controller="endpointslice"
	I0717 22:57:02.259164       1 endpointslice_controller.go:252] Starting endpoint slice controller
	I0717 22:57:02.259196       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0717 22:57:02.267086       1 controllermanager.go:638] "Started controller" controller="replicationcontroller"
	I0717 22:57:02.267328       1 replica_set.go:201] "Starting controller" name="replicationcontroller"
	I0717 22:57:02.267376       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0717 22:57:02.274035       1 controllermanager.go:638] "Started controller" controller="serviceaccount"
	I0717 22:57:02.274188       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0717 22:57:02.274231       1 shared_informer.go:311] Waiting for caches to sync for service account
	
	* 
	* ==> kube-controller-manager [96a6f5f2a73e] <==
	* I0717 22:56:51.696888       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:56:52.280392       1 controllermanager.go:187] "Starting" version="v1.27.3"
	I0717 22:56:52.280439       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:52.281811       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0717 22:56:52.281867       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:56:52.281895       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0717 22:56:52.281950       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	
	* 
	* ==> kube-scheduler [7ff7e0d5f99c] <==
	* I0717 22:56:51.661879       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:56:53.857098       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:56:53.857141       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:56:53.864487       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 22:56:53.864666       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 22:56:53.865106       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:56:53.865150       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:56:53.864624       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:56:53.865956       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:56:53.864642       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 22:56:53.866375       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:56:53.965277       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0717 22:56:53.966958       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:56:53.967016       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:56:55.364329       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0717 22:56:55.364392       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0717 22:56:55.364504       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I0717 22:56:55.364929       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:56:55.364946       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 22:56:55.365005       1 requestheader_controller.go:183] Shutting down RequestHeaderAuthRequestController
	E0717 22:56:55.365376       1 run.go:74] "command failed" err="finished without leader elect"
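
"finished without leader elect" is the scheduler's ordinary exit path during this kind of restart: its run context is cancelled before it re-wins leader election, it reports the error above, and kubelet restarts it as the [c61f75f91f53] instance below. With the default lease-based resource lock in v1.27, current leadership for the scheduler and controller-manager is visible as Lease objects, as a sketch:

    kubectl -n kube-system get lease kube-scheduler kube-controller-manager

The HOLDER column names the instance that most recently renewed each lease.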
	
	* 
	* ==> kube-scheduler [c61f75f91f53] <==
	* I0717 22:56:58.193085       1 serving.go:348] Generated self-signed cert in-memory
	I0717 22:57:00.175697       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 22:57:00.175737       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 22:57:00.179070       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0717 22:57:00.179109       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0717 22:57:00.179116       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 22:57:00.179127       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:57:00.179164       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0717 22:57:00.179203       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0717 22:57:00.180985       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 22:57:00.181026       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 22:57:00.279913       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I0717 22:57:00.279913       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 22:57:00.280848       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166364   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71a9d79d7632229b97f7fa161f19ade2-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-420000\" (UID: \"71a9d79d7632229b97f7fa161f19ade2\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166501   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fa2ac5a0ea662804de9e9486c477a32-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-420000\" (UID: \"9fa2ac5a0ea662804de9e9486c477a32\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166628   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9fa2ac5a0ea662804de9e9486c477a32-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-420000\" (UID: \"9fa2ac5a0ea662804de9e9486c477a32\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166663   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9fa2ac5a0ea662804de9e9486c477a32-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-420000\" (UID: \"9fa2ac5a0ea662804de9e9486c477a32\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166687   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9fa2ac5a0ea662804de9e9486c477a32-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-420000\" (UID: \"9fa2ac5a0ea662804de9e9486c477a32\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166714   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/265afbc46517611b2f188b02802be27c-etcd-certs\") pod \"etcd-kubernetes-upgrade-420000\" (UID: \"265afbc46517611b2f188b02802be27c\") " pod="kube-system/etcd-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166732   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71a9d79d7632229b97f7fa161f19ade2-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-420000\" (UID: \"71a9d79d7632229b97f7fa161f19ade2\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166750   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71a9d79d7632229b97f7fa161f19ade2-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-420000\" (UID: \"71a9d79d7632229b97f7fa161f19ade2\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166776   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71a9d79d7632229b97f7fa161f19ade2-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-420000\" (UID: \"71a9d79d7632229b97f7fa161f19ade2\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.166813   14993 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9fa2ac5a0ea662804de9e9486c477a32-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-420000\" (UID: \"9fa2ac5a0ea662804de9e9486c477a32\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.187956   14993 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: E0717 22:56:57.188268   14993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.294773   14993 scope.go:115] "RemoveContainer" containerID="04ce2f4205d82c3e4fb8e61e99850a58f03ee1c5f62dd71dc820327fa52f95d1"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.349285   14993 scope.go:115] "RemoveContainer" containerID="86137de891636ad0e0b21e97c31848b7027e2767b6dc384c23888fa433e9c128"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.358489   14993 scope.go:115] "RemoveContainer" containerID="96a6f5f2a73ea393b9b1584e56c71af347b3d1a0e02d8c81d469980cce2cbfe1"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.365345   14993 scope.go:115] "RemoveContainer" containerID="7ff7e0d5f99c92517c2d587b9c0cb7cc3a3d6b87c5693b9603133b7f2277536d"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: E0717 22:56:57.467138   14993 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-420000?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="800ms"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:57.652644   14993 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-420000"
	Jul 17 22:56:57 kubernetes-upgrade-420000 kubelet[14993]: E0717 22:56:57.652952   14993 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="kubernetes-upgrade-420000"
	Jul 17 22:56:58 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:56:58.460427   14993 kubelet_node_status.go:70] "Attempting to register node" node="kubernetes-upgrade-420000"
	Jul 17 22:57:00 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:57:00.250290   14993 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-420000"
	Jul 17 22:57:00 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:57:00.250385   14993 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-420000"
	Jul 17 22:57:00 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:57:00.854087   14993 apiserver.go:52] "Watching apiserver"
	Jul 17 22:57:00 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:57:00.865728   14993 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 17 22:57:00 kubernetes-upgrade-420000 kubelet[14993]: I0717 22:57:00.949669   14993 reconciler.go:41] "Reconciler: start to sync state"
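
From the node side, the kubelet excerpt shows the same restart choreography: registration attempts at 22:56:57 fail with connection refused while the apiserver is down, the 22:56:58 attempt is answered once it is back, and the "Node was previously registered" path reuses the existing Node object rather than recreating it. Independent of the apiserver, the kubelet exposes a localhost healthz endpoint; 10248 is the documented default port, though it is configurable, so treat this sketch accordingly:

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok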
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-420000 -n kubernetes-upgrade-420000
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-420000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context kubernetes-upgrade-420000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-420000 describe pod storage-provisioner: exit status 1 (59.719622ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context kubernetes-upgrade-420000 describe pod storage-provisioner: exit status 1
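This post-mortem step is racy by design: the field-selector query observed storage-provisioner in a non-Running phase, but the pod was already gone (NotFound) when describe ran, so the exit status 1 above is harness noise rather than a further failure. The selector reproduces directly; for pods, status.phase is among the few status fields the apiserver accepts in field selectors:

    kubectl --context kubernetes-upgrade-420000 get pods -A \
      --field-selector=status.phase!=Running
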
helpers_test.go:175: Cleaning up "kubernetes-upgrade-420000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-420000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-420000: (2.671544813s)
--- FAIL: TestKubernetesUpgrade (568.96s)

                                                
                                    
x
+
TestMissingContainerUpgrade (50.98s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:321: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker : exit status 70 (35.026852681s)

                                                
                                                
-- stdout --
	* [missing-upgrade-653000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:47:10.093022263 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "missing-upgrade-653000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:47:24.230263348 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p missing-upgrade-653000", then "minikube start -p missing-upgrade-653000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (final size; interleaved terminal progress updates from 186.90 KiB upward elided)* 

	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:47:24.230263348 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
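
The failure mode is identical on both provisioning attempts: this legacy minikube rewrites /lib/systemd/system/docker.service wholesale and then restarts docker, and the restart fails. The diff itself illustrates standard systemd semantics: for a Type=notify service only one ExecStart= may be in effect, so a replacement must first clear the inherited value with a bare ExecStart= line before assigning a new command. A minimal sketch of the same pattern done as a proper drop-in rather than an in-place rewrite (the path and dockerd flags are illustrative, not what this minikube version writes):

    # /etc/systemd/system/docker.service.d/10-override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

    sudo systemctl daemon-reload && sudo systemctl restart docker

Two oddities in the rendered unit are worth flagging, though without the journalctl output neither can be confirmed as the cause: the generated ExecReload line has lost its $MAINPID argument ("/bin/kill -s HUP " signals nothing), and the provisioning command doubles up sudo ("sudo sudo systemctl -f restart docker"), which is harmless.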
version_upgrade_test.go:321: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker : exit status 70 (4.08442199s)

                                                
                                                
-- stdout --
	* [missing-upgrade-653000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-653000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:321: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker 
version_upgrade_test.go:321: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.2374186925.exe start -p missing-upgrade-653000 --memory=2200 --driver=docker : exit status 70 (4.241906993s)

                                                
                                                
-- stdout --
	* [missing-upgrade-653000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-653000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:327: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-07-17 15:47:38.13396 -0700 PDT m=+2463.229369312
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-653000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-653000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e",
	        "Created": "2023-07-17T22:47:18.139149367Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1114965,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T22:47:18.32656437Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e/hosts",
	        "LogPath": "/var/lib/docker/containers/5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e/5ff8c2c152bcd367c1c6fbc53a40feae7ac7cfaf59ca928c85dc9ccf35fe223e-json.log",
	        "Name": "/missing-upgrade-653000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "missing-upgrade-653000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/490a3213676408ad76c407c4e1a64d6f845c3a95b7176ba6d7846bfbb0427dbf-init/diff:/var/lib/docker/overlay2/6f79fcf1ae04c7315470ec130311770a1d5a1f09c9c016611ad483c8624a568c/diff:/var/lib/docker/overlay2/824d311c0f1a56d58a1b4de8d0d46c2c25458d301b90c38bb87793f004510773/diff:/var/lib/docker/overlay2/23f9e480bbe2b5e47902c250e9dcbf010afbdf61065cf22c306ba0406feb5016/diff:/var/lib/docker/overlay2/40a6b4863a53c49cb42b60a74dcc867d3b22055aeadb1acd6477fb476b42c8c4/diff:/var/lib/docker/overlay2/1e918fa9b271b3fa8d5cf84d6607ca2d421f8e27a238a1b09b7e989b0a0c9d6c/diff:/var/lib/docker/overlay2/43b32e81584664e728c478e05656242f76bc14c9092335f31f9c655d5d4b7d32/diff:/var/lib/docker/overlay2/50b96258598f5094ffbcb721c9226fc815cbb791d0a52acc146b9144fa132eb1/diff:/var/lib/docker/overlay2/9912d8aa578a55d3d854e85fb2c747ff454e142b9fba0446203653f2bfcfebf6/diff:/var/lib/docker/overlay2/fec59d5c2f0915ec67172bad6ff0580636c5cf30ac8f856fa52468d1e6e63eb8/diff:/var/lib/docker/overlay2/c64c0a
ca425c4e87fd598f1e176767c7587d40c04e1a418dd890e59476381def/diff:/var/lib/docker/overlay2/5b11f255860ccf7f8c12dcee584cdd6cf8749747563ca3d98dcb67a103f8876b/diff:/var/lib/docker/overlay2/f5e0502d23539f3d763856b84cc5929004b42c51b8ddcae1cc794c6e3f27cfd3/diff:/var/lib/docker/overlay2/f206036c73f93e71f2749ce2bdc2d5a05ae51031ad42fdd0851eb8b6305c95c0/diff:/var/lib/docker/overlay2/056325070bfcb7eab70071932a81d69bb8a78745bd783bf69c1f3aba45d8ad07/diff:/var/lib/docker/overlay2/506c189a7c5a2dd15dcb23866ea5b0de3f3cbfa45f8a5ed101b1da8cc01acd74/diff:/var/lib/docker/overlay2/a22f478f372890594a544a7667aff6bc1a4e11e024ffc62567c749235e429a49/diff:/var/lib/docker/overlay2/4d0b46e6475de6ab69177443c4e46a7d5285842f33cf8a1e08e77f234efc16b6/diff:/var/lib/docker/overlay2/21136419843b9dd031a7265c9796c123f4b7fc4a3eded9c5606126a076cd0c0e/diff:/var/lib/docker/overlay2/b4079f72b4fa546a22f2d285aa0df36e4efba9859314f5b77604b4d04b43cdcd/diff:/var/lib/docker/overlay2/b31b32e472f01811273fd8cc81dce6165b6336c168c1a0cb892f40cff012b826/diff:/var/lib/d
ocker/overlay2/829369828d47f4ae231abb0804d8da84c80120c46f306995bd9886cf4465aed0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/490a3213676408ad76c407c4e1a64d6f845c3a95b7176ba6d7846bfbb0427dbf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/490a3213676408ad76c407c4e1a64d6f845c3a95b7176ba6d7846bfbb0427dbf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/490a3213676408ad76c407c4e1a64d6f845c3a95b7176ba6d7846bfbb0427dbf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-653000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-653000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-653000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-653000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-653000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7024fa81fbdb340277ccb8ec52f514e523993dc3448a15c3ca957aec9ea9e93d",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55411"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "55412"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7024fa81fbdb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "89d34fa94a3919721c74a8e88210b9eb7a3f638ddb9bbfb1cae8786e6de31043",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "ef7f43e58377b53217a2c20dab25ef139178c7d4a4cc2ff02958959170ac9e32",
	                    "EndpointID": "89d34fa94a3919721c74a8e88210b9eb7a3f638ddb9bbfb1cae8786e6de31043",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
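The inspect dump shows a container that is itself healthy (State.Status "running", ExitCode 0, kicbase v0.0.8 image); the breakage is one layer down, in docker.service inside the container. When a post-mortem only needs a field or two, docker inspect accepts a Go template, in the same spirit as the --format={{.Host}} status calls this harness already uses:

    docker inspect -f '{{.State.Status}} {{.State.ExitCode}}' missing-upgrade-653000
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' missing-upgrade-653000

The second template pulls the host port mapped to 8443, matching the Ports block above (55412 here).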
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-653000 -n missing-upgrade-653000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-653000 -n missing-upgrade-653000: exit status 6 (411.279963ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 15:47:38.586290   86409 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-653000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-653000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
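The status error is not about the container, which reports Running, but about the kubeconfig: the profile "missing-upgrade-653000" never made it into the kubeconfig file, so the endpoint cannot be extracted and kubectl is left pointing at a stale minikube-vm entry. Outside CI, the usual remedies are the ones the output itself suggests, sketched here:

    minikube update-context -p missing-upgrade-653000
    kubectl config view --minify --output 'jsonpath={..server}'

The second command prints the server URL behind the current context, which makes a stale endpoint easy to spot.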
helpers_test.go:175: Cleaning up "missing-upgrade-653000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-653000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-653000: (2.261306481s)
--- FAIL: TestMissingContainerUpgrade (50.98s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.17s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.0 on darwin
- MINIKUBE_LOCATION=16899
- KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2713404160/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
! Unable to update hyperkit driver: download: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.31.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.31.0/docker-machine-driver-hyperkit.sha256 Dst:/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2713404160/001/.minikube/bin/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x4b42ca8 0x4b42ca8 0x4b42ca8 0x4b42ca8 0x4b42ca8 0x4b42ca8 0x4b42ca8] Decompressors:map[bz2:0xc000128f78 gz:0xc000129050 tar:0xc000128fc0 tar.bz2:0xc000128fe0 tar.gz:0xc000129000 tar.xz:0xc000129030 tar.zst:0xc000129040 tbz2:0xc000128fe0 tgz:0xc000129000 txz:0xc000129030 tzst:0xc000129040 xz:0xc000129058 zip:0xc000129060 zst:0xc000129070] Getters:map[file:0xc00052dcd0 http:0xc000b6f0e0 https:0xc000b6f130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
driver_install_or_update_test.go:218: invalid driver version. expected: v1.31.0, got: v1.2.0
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (11.17s)
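
Note on the failure above: the 404 on the .sha256 URL makes the getter abort before any hash comparison, so the stale v1.2.0 driver is left in place and the version assertion fails. A manual re-fetch sketch using only the URLs quoted in the log above (curl and shasum on the macOS agent are assumed to be available):

	# Download the driver binary and its published checksum, then compare by hand.
	curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.31.0/docker-machine-driver-hyperkit
	curl -fL -o docker-machine-driver-hyperkit.sha256 https://github.com/kubernetes/minikube/releases/download/v1.31.0/docker-machine-driver-hyperkit.sha256
	shasum -a 256 docker-machine-driver-hyperkit
	cat docker-machine-driver-hyperkit.sha256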

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (45.5s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker : exit status 70 (35.407347035s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-938000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig2854872907
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:49:27.919099560 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-938000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:49:42.750100416 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-938000", then "minikube start -p stopped-upgrade-938000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB  [download progress ticker elided]
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 22:49:42.750100416 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:195: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker 
E0717 15:49:47.601964   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker : exit status 70 (4.126621071s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-938000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1565248796
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-938000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:195: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube-v1.9.0.607819348.exe start -p stopped-upgrade-938000 --memory=2200 --vm-driver=docker : exit status 70 (4.067250606s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-938000] minikube v1.9.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/legacy_kubeconfig1757347405
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-938000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:201: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (45.50s)
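
Note on the failure above: the v1.9.0 provisioner rewrites /lib/systemd/system/docker.service wholesale, and the rewritten unit no longer starts on the newer base image. The empty ExecStart= line visible in the diff is the standard systemd idiom for replacing an inherited command; a minimal sketch of the same idiom done as a proper drop-in rather than a full rewrite (paths and the dockerd command line are illustrative, taken from the original unit in the diff):

	# Clear the inherited ExecStart, then set a new one; without the blank
	# assignment systemd rejects the unit ("more than one ExecStart= setting").
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker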

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (258.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m17.679612309s)

                                                
                                                
-- stdout --
	* [old-k8s-version-770000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-770000 in cluster old-k8s-version-770000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 16:01:24.542673   92606 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:01:24.542862   92606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:01:24.542868   92606 out.go:309] Setting ErrFile to fd 2...
	I0717 16:01:24.542873   92606 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:01:24.543050   92606 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:01:24.544603   92606 out.go:303] Setting JSON to false
	I0717 16:01:24.564344   92606 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25252,"bootTime":1689609632,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:01:24.564429   92606 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:01:24.586979   92606 out.go:177] * [old-k8s-version-770000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:01:24.628876   92606 notify.go:220] Checking for updates...
	I0717 16:01:24.628900   92606 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:01:24.672058   92606 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:01:24.692709   92606 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:01:24.713942   92606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:01:24.735096   92606 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:01:24.755674   92606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:01:24.777629   92606 config.go:182] Loaded profile config "calico-679000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:01:24.777802   92606 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:01:24.834581   92606 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:01:24.834711   92606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:01:24.937600   92606 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:01:24.924955543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:01:24.979685   92606 out.go:177] * Using the docker driver based on user configuration
	I0717 16:01:25.000705   92606 start.go:298] selected driver: docker
	I0717 16:01:25.000731   92606 start.go:880] validating driver "docker" against <nil>
	I0717 16:01:25.000747   92606 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:01:25.004481   92606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:01:25.107864   92606 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:01:25.095953316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:01:25.108043   92606 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 16:01:25.108256   92606 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 16:01:25.129913   92606 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 16:01:25.150473   92606 cni.go:84] Creating CNI manager for ""
	I0717 16:01:25.150511   92606 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 16:01:25.150529   92606 start_flags.go:319] config:
	{Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:01:25.193619   92606 out.go:177] * Starting control plane node old-k8s-version-770000 in cluster old-k8s-version-770000
	I0717 16:01:25.214729   92606 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:01:25.236690   92606 out.go:177] * Pulling base image ...
	I0717 16:01:25.294716   92606 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 16:01:25.294776   92606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:01:25.294834   92606 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 16:01:25.294861   92606 cache.go:57] Caching tarball of preloaded images
	I0717 16:01:25.295108   92606 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:01:25.295138   92606 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 16:01:25.296233   92606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/config.json ...
	I0717 16:01:25.296439   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/config.json: {Name:mkff572bcce20dd159d8de0c14cbfe6bb39d473b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:25.345673   92606 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:01:25.345693   92606 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:01:25.345713   92606 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:01:25.345752   92606 start.go:365] acquiring machines lock for old-k8s-version-770000: {Name:mk0f9163ab3562db295835b9e526369b56772523 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:01:25.345951   92606 start.go:369] acquired machines lock for "old-k8s-version-770000" in 186.726µs
	I0717 16:01:25.345979   92606 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 16:01:25.346074   92606 start.go:125] createHost starting for "" (driver="docker")
	I0717 16:01:25.388813   92606 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 16:01:25.389181   92606 start.go:159] libmachine.API.Create for "old-k8s-version-770000" (driver="docker")
	I0717 16:01:25.389226   92606 client.go:168] LocalClient.Create starting
	I0717 16:01:25.389424   92606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem
	I0717 16:01:25.389497   92606 main.go:141] libmachine: Decoding PEM data...
	I0717 16:01:25.389539   92606 main.go:141] libmachine: Parsing certificate...
	I0717 16:01:25.389669   92606 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem
	I0717 16:01:25.389721   92606 main.go:141] libmachine: Decoding PEM data...
	I0717 16:01:25.389738   92606 main.go:141] libmachine: Parsing certificate...
	I0717 16:01:25.390653   92606 cli_runner.go:164] Run: docker network inspect old-k8s-version-770000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 16:01:25.439953   92606 cli_runner.go:211] docker network inspect old-k8s-version-770000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 16:01:25.440057   92606 network_create.go:281] running [docker network inspect old-k8s-version-770000] to gather additional debugging logs...
	I0717 16:01:25.440074   92606 cli_runner.go:164] Run: docker network inspect old-k8s-version-770000
	W0717 16:01:25.491159   92606 cli_runner.go:211] docker network inspect old-k8s-version-770000 returned with exit code 1
	I0717 16:01:25.491234   92606 network_create.go:284] error running [docker network inspect old-k8s-version-770000]: docker network inspect old-k8s-version-770000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-770000 not found
	I0717 16:01:25.491250   92606 network_create.go:286] output of [docker network inspect old-k8s-version-770000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-770000 not found
	
	** /stderr **
	I0717 16:01:25.491370   92606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 16:01:25.544461   92606 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 16:01:25.544828   92606 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dc07c0}
	I0717 16:01:25.544849   92606 network_create.go:123] attempt to create docker network old-k8s-version-770000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 16:01:25.544928   92606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000
	W0717 16:01:25.594696   92606 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000 returned with exit code 1
	W0717 16:01:25.594728   92606 network_create.go:148] failed to create docker network old-k8s-version-770000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 16:01:25.594747   92606 network_create.go:115] failed to create docker network old-k8s-version-770000 192.168.58.0/24, will retry: subnet is taken
	I0717 16:01:25.596081   92606 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 16:01:25.596399   92606 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00070f800}
	I0717 16:01:25.596412   92606 network_create.go:123] attempt to create docker network old-k8s-version-770000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 16:01:25.596477   92606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000
	W0717 16:01:25.646102   92606 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000 returned with exit code 1
	W0717 16:01:25.646141   92606 network_create.go:148] failed to create docker network old-k8s-version-770000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 16:01:25.646157   92606 network_create.go:115] failed to create docker network old-k8s-version-770000 192.168.67.0/24, will retry: subnet is taken
	I0717 16:01:25.647727   92606 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 16:01:25.648052   92606 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e8df40}
	I0717 16:01:25.648069   92606 network_create.go:123] attempt to create docker network old-k8s-version-770000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 65535 ...
	I0717 16:01:25.648134   92606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-770000 old-k8s-version-770000
	I0717 16:01:25.730400   92606 network_create.go:107] docker network old-k8s-version-770000 192.168.76.0/24 created
	I0717 16:01:25.730447   92606 kic.go:117] calculated static IP "192.168.76.2" for the "old-k8s-version-770000" container
	I0717 16:01:25.730574   92606 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 16:01:25.780443   92606 cli_runner.go:164] Run: docker volume create old-k8s-version-770000 --label name.minikube.sigs.k8s.io=old-k8s-version-770000 --label created_by.minikube.sigs.k8s.io=true
	I0717 16:01:25.830567   92606 oci.go:103] Successfully created a docker volume old-k8s-version-770000
	I0717 16:01:25.830689   92606 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-770000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-770000 --entrypoint /usr/bin/test -v old-k8s-version-770000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 16:01:26.326123   92606 oci.go:107] Successfully prepared a docker volume old-k8s-version-770000
	I0717 16:01:26.326162   92606 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 16:01:26.326178   92606 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 16:01:26.326296   92606 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-770000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 16:01:29.146536   92606 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-770000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.820027546s)
	I0717 16:01:29.146566   92606 kic.go:199] duration metric: took 2.820233 seconds to extract preloaded images to volume
	I0717 16:01:29.146679   92606 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 16:01:29.248415   92606 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-770000 --name old-k8s-version-770000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-770000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-770000 --network old-k8s-version-770000 --ip 192.168.76.2 --volume old-k8s-version-770000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 16:01:29.526563   92606 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Running}}
	I0717 16:01:29.585979   92606 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Status}}
	I0717 16:01:29.641218   92606 cli_runner.go:164] Run: docker exec old-k8s-version-770000 stat /var/lib/dpkg/alternatives/iptables
	I0717 16:01:29.741999   92606 oci.go:144] the created container "old-k8s-version-770000" has a running status.
	I0717 16:01:29.742048   92606 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa...
	I0717 16:01:29.863646   92606 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 16:01:29.931585   92606 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Status}}
	I0717 16:01:29.984971   92606 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 16:01:29.984993   92606 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-770000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0717 16:01:30.083316   92606 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Status}}
	I0717 16:01:30.137601   92606 machine.go:88] provisioning docker machine ...
	I0717 16:01:30.137647   92606 ubuntu.go:169] provisioning hostname "old-k8s-version-770000"
	I0717 16:01:30.137756   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:30.191940   92606 main.go:141] libmachine: Using SSH client type: native
	I0717 16:01:30.192336   92606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57123 <nil> <nil>}
	I0717 16:01:30.192347   92606 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-770000 && echo "old-k8s-version-770000" | sudo tee /etc/hostname
	I0717 16:01:30.333426   92606 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-770000
	
	I0717 16:01:30.333514   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:30.385244   92606 main.go:141] libmachine: Using SSH client type: native
	I0717 16:01:30.385586   92606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57123 <nil> <nil>}
	I0717 16:01:30.385601   92606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-770000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-770000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-770000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:01:30.513862   92606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 16:01:30.513885   92606 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:01:30.513914   92606 ubuntu.go:177] setting up certificates
	I0717 16:01:30.513924   92606 provision.go:83] configureAuth start
	I0717 16:01:30.514006   92606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:01:30.564855   92606 provision.go:138] copyHostCerts
	I0717 16:01:30.564967   92606 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:01:30.564976   92606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:01:30.565103   92606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:01:30.565312   92606 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:01:30.565318   92606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:01:30.565387   92606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:01:30.565535   92606 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:01:30.565541   92606 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:01:30.565661   92606 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:01:30.565791   92606 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-770000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-770000]
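	The server certificate generated above is signed by the shared minikube CA and carries both IP and DNS SANs from the logged san=[...] list. A hedged sketch of that issuance using the standard crypto/x509 package; a throwaway in-memory CA stands in for ca.pem/ca-key.pem, and this is an illustration rather than minikube's provision.go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"log"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate signed by the given CA, with
	// the SAN list from the log line above.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-770000"}},
			IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-770000"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// DER-encoded certificate signed by the CA.
		return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	}

	func main() {
		// Throwaway CA standing in for .minikube/certs/ca.pem / ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}
		der, err := signServerCert(caCert, caKey)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(der))
	}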
	I0717 16:01:30.689553   92606 provision.go:172] copyRemoteCerts
	I0717 16:01:30.689664   92606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:01:30.689714   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:30.743149   92606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57123 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:01:30.837825   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:01:30.860510   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 16:01:30.882397   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 16:01:30.904603   92606 provision.go:86] duration metric: configureAuth took 390.645426ms
	I0717 16:01:30.904618   92606 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:01:30.904766   92606 config.go:182] Loaded profile config "old-k8s-version-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 16:01:30.904874   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:30.957345   92606 main.go:141] libmachine: Using SSH client type: native
	I0717 16:01:30.957706   92606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57123 <nil> <nil>}
	I0717 16:01:30.957723   92606 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:01:31.086411   92606 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:01:31.086436   92606 ubuntu.go:71] root file system type: overlay
	I0717 16:01:31.086527   92606 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:01:31.086621   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:31.137493   92606 main.go:141] libmachine: Using SSH client type: native
	I0717 16:01:31.137846   92606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57123 <nil> <nil>}
	I0717 16:01:31.137898   92606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:01:31.276827   92606 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:01:31.276933   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:31.328350   92606 main.go:141] libmachine: Using SSH client type: native
	I0717 16:01:31.328708   92606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57123 <nil> <nil>}
	I0717 16:01:31.328722   92606 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:01:31.993924   92606 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 23:01:31.273481749 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 16:01:31.993948   92606 machine.go:91] provisioned docker machine in 1.85624136s
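	The single SSH command at 16:01:31.328 above encodes an idempotent update: diff the freshly rendered unit against the installed one, and only move it into place and restart Docker when they differ (here they did, so the restart ran). A sketch of the same guard in Go; the helper name and the truncated unit content are ours, and the real flow runs these steps remotely rather than locally:

	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// updateUnit replaces path with want only when the content actually changed,
	// then reloads systemd and restarts the service, mirroring the shell one-liner.
	func updateUnit(path string, want []byte) error {
		have, err := os.ReadFile(path)
		if err == nil && bytes.Equal(have, want) {
			return nil // unit unchanged: skip daemon-reload and the Docker restart
		}
		if err := os.WriteFile(path+".new", want, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%s: %v: %s", args[0], err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated stand-in for the full unit above
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			log.Fatal(err)
		}
	}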
	I0717 16:01:31.993962   92606 client.go:171] LocalClient.Create took 6.604384544s
	I0717 16:01:31.993987   92606 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-770000" took 6.604464031s
	I0717 16:01:31.994014   92606 start.go:300] post-start starting for "old-k8s-version-770000" (driver="docker")
	I0717 16:01:31.994027   92606 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:01:31.994097   92606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:01:31.994160   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:32.045886   92606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57123 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:01:32.138516   92606 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:01:32.142692   92606 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:01:32.142714   92606 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:01:32.142722   92606 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:01:32.142727   92606 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:01:32.142736   92606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:01:32.142821   92606 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:01:32.143002   92606 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:01:32.143182   92606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:01:32.152208   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:01:32.174511   92606 start.go:303] post-start completed in 180.476259ms
	I0717 16:01:32.175045   92606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:01:32.227654   92606 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/config.json ...
	I0717 16:01:32.228094   92606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:01:32.228156   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:32.278725   92606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57123 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:01:32.368244   92606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:01:32.373504   92606 start.go:128] duration metric: createHost completed in 7.027054995s
	I0717 16:01:32.373521   92606 start.go:83] releasing machines lock for "old-k8s-version-770000", held for 7.027198537s
	I0717 16:01:32.373626   92606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:01:32.424854   92606 ssh_runner.go:195] Run: cat /version.json
	I0717 16:01:32.424873   92606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:01:32.424936   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:32.424953   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:32.481001   92606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57123 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:01:32.484890   92606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57123 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:01:32.677685   92606 ssh_runner.go:195] Run: systemctl --version
	I0717 16:01:32.683121   92606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:01:32.688598   92606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:01:32.713113   92606 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:01:32.713177   92606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 16:01:32.729607   92606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 16:01:32.745444   92606 cni.go:314] configured [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0717 16:01:32.745462   92606 start.go:466] detecting cgroup driver to use...
	I0717 16:01:32.745477   92606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:01:32.745593   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:01:32.761564   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 16:01:32.771578   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:01:32.781421   92606 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:01:32.781481   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:01:32.791352   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:01:32.801376   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:01:32.811460   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:01:32.821149   92606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:01:32.830538   92606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:01:32.840533   92606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:01:32.849588   92606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:01:32.857874   92606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:01:32.932274   92606 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 16:01:33.002780   92606 start.go:466] detecting cgroup driver to use...
	I0717 16:01:33.002801   92606 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:01:33.002872   92606 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:01:33.016008   92606 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:01:33.016088   92606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:01:33.028550   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:01:33.046695   92606 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:01:33.051698   92606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:01:33.085814   92606 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:01:33.104660   92606 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:01:33.205547   92606 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:01:33.295720   92606 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:01:33.295737   92606 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:01:33.313589   92606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:01:33.384030   92606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:01:33.657882   92606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:01:33.683993   92606 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:01:33.765797   92606 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 16:01:33.765885   92606 cli_runner.go:164] Run: docker exec -t old-k8s-version-770000 dig +short host.docker.internal
	I0717 16:01:33.896251   92606 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:01:33.896391   92606 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:01:33.902404   92606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:01:33.917594   92606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:01:33.972987   92606 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 16:01:33.973064   92606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:01:33.995367   92606 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 16:01:33.995387   92606 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 16:01:33.995437   92606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 16:01:34.005190   92606 ssh_runner.go:195] Run: which lz4
	I0717 16:01:34.009853   92606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 16:01:34.015163   92606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 16:01:34.015225   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 16:01:39.803332   92606 docker.go:600] Took 5.793350 seconds to copy over tarball
	I0717 16:01:39.803436   92606 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 16:01:42.394194   92606 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.590665838s)
	I0717 16:01:42.394208   92606 ssh_runner.go:146] rm: /preloaded.tar.lz4
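	The preload step scp's a ~370 MB lz4-compressed image tarball into the container and unpacks it over /var, which is what `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4` does. A minimal local sketch of that unpack, assuming github.com/pierrec/lz4/v4 for decompression; minikube itself shells out to tar, and a production extractor must also sanitize entry names against path traversal:

	package main

	import (
		"archive/tar"
		"io"
		"log"
		"os"
		"path/filepath"

		"github.com/pierrec/lz4/v4"
	)

	func main() {
		f, err := os.Open("/preloaded.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Decompress on the fly and walk the tar stream, writing under /var.
		tr := tar.NewReader(lz4.NewReader(f))
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break // end of archive
			}
			if err != nil {
				log.Fatal(err)
			}
			// WARNING: no guard against ../ entries here; real code must
			// reject names that escape the target directory.
			dst := filepath.Join("/var", hdr.Name)
			switch hdr.Typeflag {
			case tar.TypeDir:
				if err := os.MkdirAll(dst, os.FileMode(hdr.Mode)); err != nil {
					log.Fatal(err)
				}
			case tar.TypeReg:
				out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(hdr.Mode))
				if err != nil {
					log.Fatal(err)
				}
				if _, err := io.Copy(out, tr); err != nil {
					log.Fatal(err)
				}
				out.Close()
			}
		}
	}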
	I0717 16:01:42.451968   92606 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 16:01:42.462476   92606 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 16:01:42.481568   92606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:01:42.563842   92606 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:01:43.046539   92606 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:01:43.069150   92606 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 16:01:43.069169   92606 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 16:01:43.069177   92606 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 16:01:43.074793   92606 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:01:43.074793   92606 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 16:01:43.075299   92606 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:01:43.075336   92606 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 16:01:43.075378   92606 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:01:43.075431   92606 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:01:43.076199   92606 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:01:43.076279   92606 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:01:43.080802   92606 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:01:43.081245   92606 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:01:43.082081   92606 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 16:01:43.082358   92606 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:01:43.082550   92606 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 16:01:43.082577   92606 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:01:43.082735   92606 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:01:43.084337   92606 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:01:44.308300   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 16:01:44.331211   92606 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 16:01:44.331248   92606 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:01:44.331314   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 16:01:44.352260   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
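	Each cache_images check above follows the same cycle: ask the runtime for the image's ID, compare it with the expected content hash, and mark the image as needing transfer from the on-disk cache when they differ. A sketch of that check; the helper name is ours, and note that `docker image inspect` reports IDs with a `sha256:` prefix that the logged hashes omit:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// needsTransfer reports whether the runtime's copy of image differs from
	// the expected hash (helper name is ours, not cache_images.go's).
	func needsTransfer(image, wantID string) bool {
		out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
		if err != nil {
			return true // image absent: must be loaded from the cache
		}
		// docker reports "sha256:<hash>"; the log's hashes omit the prefix.
		id := strings.TrimPrefix(strings.TrimSpace(string(out)), "sha256:")
		return id != wantID
	}

	func main() {
		fmt.Println(needsTransfer("registry.k8s.io/etcd:3.3.15-0",
			"b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed"))
	}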
	I0717 16:01:44.455104   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:01:44.477932   92606 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 16:01:44.477966   92606 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:01:44.478028   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:01:44.501402   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 16:01:44.588611   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 16:01:44.613005   92606 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 16:01:44.613038   92606 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 16:01:44.613092   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 16:01:44.636011   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 16:01:45.109346   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 16:01:45.132409   92606 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 16:01:45.132444   92606 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 16:01:45.132509   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 16:01:45.154147   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 16:01:45.421993   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:01:45.424304   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:01:45.455206   92606 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 16:01:45.455243   92606 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:01:45.455316   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:01:45.479191   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 16:01:45.755079   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:01:45.777515   92606 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 16:01:45.777558   92606 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:01:45.777630   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:01:45.801332   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 16:01:46.350607   92606 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:01:46.370043   92606 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 16:01:46.370076   92606 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:01:46.370148   92606 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:01:46.390027   92606 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 16:01:46.390073   92606 cache_images.go:92] LoadImages completed in 3.320798361s
	W0717 16:01:46.390123   92606 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0: no such file or directory
	I0717 16:01:46.390195   92606 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:01:46.446838   92606 cni.go:84] Creating CNI manager for ""
	I0717 16:01:46.446855   92606 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 16:01:46.446872   92606 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 16:01:46.446895   92606 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-770000 NodeName:old-k8s-version-770000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 16:01:46.447001   92606 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-770000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-770000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 16:01:46.447070   92606 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-770000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 16:01:46.447135   92606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 16:01:46.457801   92606 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:01:46.457862   92606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:01:46.469148   92606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0717 16:01:46.488380   92606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:01:46.507522   92606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
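	The kubeadm.yaml.new written above is the config printed after the "kubeadm options:" line, rendered from that options struct. A reduced sketch of that rendering with text/template; the struct fields and template content here are a small illustrative subset, not minikube's actual template:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// kubeadmOpts is an illustrative subset of the fields in the
	// "kubeadm options:" line; it is not minikube's actual struct.
	type kubeadmOpts struct {
		AdvertiseAddress string
		NodeName         string
		PodSubnet        string
		ServiceCIDR      string
		K8sVersion       string
	}

	const configTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(configTmpl))
		err := t.Execute(os.Stdout, kubeadmOpts{
			AdvertiseAddress: "192.168.76.2",
			NodeName:         "old-k8s-version-770000",
			PodSubnet:        "10.244.0.0/16",
			ServiceCIDR:      "10.96.0.0/12",
			K8sVersion:       "v1.16.0",
		})
		if err != nil {
			log.Fatal(err)
		}
	}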
	I0717 16:01:46.528271   92606 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:01:46.533254   92606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:01:46.546751   92606 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000 for IP: 192.168.76.2
	I0717 16:01:46.546784   92606 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.546989   92606 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:01:46.547084   92606 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:01:46.547154   92606 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.key
	I0717 16:01:46.547175   92606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.crt with IP's: []
	I0717 16:01:46.662246   92606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.crt ...
	I0717 16:01:46.662270   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.crt: {Name:mk381354864b59b07e58991d1d9d8a70c4426bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.662712   92606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.key ...
	I0717 16:01:46.662722   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.key: {Name:mkaf6c5fc183fa44b46c75514c440f7970af7573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.662978   92606 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key.31bdca25
	I0717 16:01:46.663012   92606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 16:01:46.736349   92606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt.31bdca25 ...
	I0717 16:01:46.736367   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt.31bdca25: {Name:mk3968f378081303c69947e75cea3b03adaa7ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.736691   92606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key.31bdca25 ...
	I0717 16:01:46.736700   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key.31bdca25: {Name:mk810bdeb5511fa3db2c0c958a7bffafc4aff436 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.736915   92606 certs.go:337] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt
	I0717 16:01:46.737115   92606 certs.go:341] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key
	I0717 16:01:46.737287   92606 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key
	I0717 16:01:46.737354   92606 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.crt with IP's: []
	I0717 16:01:46.781045   92606 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.crt ...
	I0717 16:01:46.781061   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.crt: {Name:mkfddc52176926356badd66cf60b995a8c93394d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.781390   92606 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key ...
	I0717 16:01:46.781399   92606 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key: {Name:mk8267b910eec7822db3634557d1804094a71d0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:01:46.781833   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:01:46.781888   92606 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:01:46.781905   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:01:46.781941   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:01:46.781973   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:01:46.782008   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:01:46.782082   92606 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:01:46.782657   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:01:46.805346   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 16:01:46.829912   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:01:46.854243   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 16:01:46.878203   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:01:46.903705   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:01:46.928736   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:01:46.955293   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:01:46.980214   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:01:47.005053   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:01:47.031458   92606 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:01:47.058211   92606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:01:47.077744   92606 ssh_runner.go:195] Run: openssl version
	I0717 16:01:47.084658   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:01:47.095943   92606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:01:47.101644   92606 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:01:47.101713   92606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:01:47.110161   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:01:47.121518   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:01:47.133664   92606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:01:47.138463   92606 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:01:47.138524   92606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:01:47.145895   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:01:47.156109   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:01:47.166062   92606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:01:47.170876   92606 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:01:47.170928   92606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:01:47.178212   92606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
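	The three openssl/ln sequences above implement OpenSSL's hashed-directory convention: a certificate in /etc/ssl/certs is located through a `<subject-hash>.0` symlink, where the hash comes from `openssl x509 -hash -noout` (51391683, 3ec20f2e, and b5213941 in this run). A sketch of creating such a link; the function name is ours, and writing to /etc/ssl/certs requires root:

	package main

	import (
		"log"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCert creates the <subject-hash>.0 symlink OpenSSL uses for
	// lookups in /etc/ssl/certs.
	func linkCert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem in this run
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // mimic ln -fs: replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			log.Fatal(err)
		}
	}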
	I0717 16:01:47.187971   92606 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:01:47.192545   92606 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 16:01:47.192591   92606 kubeadm.go:404] StartCluster: {Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:01:47.192699   92606 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:01:47.213075   92606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:01:47.224484   92606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:01:47.240494   92606 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:01:47.240567   92606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:01:47.250845   92606 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 16:01:47.250880   92606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:01:47.311048   92606 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:01:47.311270   92606 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:01:47.595202   92606 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:01:47.595313   92606 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:01:47.595393   92606 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:01:47.796129   92606 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:01:47.796978   92606 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:01:47.803893   92606 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:01:47.887163   92606 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:01:47.908783   92606 out.go:204]   - Generating certificates and keys ...
	I0717 16:01:47.908873   92606 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:01:47.908961   92606 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:01:48.107400   92606 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 16:01:48.172101   92606 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 16:01:48.352959   92606 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 16:01:48.533240   92606 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 16:01:48.623978   92606 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 16:01:48.624092   92606 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-770000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 16:01:48.701963   92606 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 16:01:48.702139   92606 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-770000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0717 16:01:48.989598   92606 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 16:01:49.136647   92606 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 16:01:49.216225   92606 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 16:01:49.216427   92606 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:01:49.415632   92606 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:01:49.645563   92606 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:01:49.861489   92606 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:01:50.022126   92606 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:01:50.022826   92606 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:01:50.064539   92606 out.go:204]   - Booting up control plane ...
	I0717 16:01:50.064696   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:01:50.064792   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:01:50.064906   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:01:50.065020   92606 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:01:50.065213   92606 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:02:30.034609   92606 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:02:30.036011   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:02:30.036272   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:02:35.036474   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:02:35.036648   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:02:45.037397   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:02:45.037558   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:03:05.038653   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:03:05.038815   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:03:45.040508   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:03:45.040718   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:03:45.040727   92606 kubeadm.go:322] 
	I0717 16:03:45.040755   92606 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:03:45.040799   92606 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:03:45.040826   92606 kubeadm.go:322] 
	I0717 16:03:45.040869   92606 kubeadm.go:322] This error is likely caused by:
	I0717 16:03:45.040897   92606 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:03:45.041013   92606 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:03:45.041028   92606 kubeadm.go:322] 
	I0717 16:03:45.041153   92606 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:03:45.041204   92606 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:03:45.041246   92606 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:03:45.041261   92606 kubeadm.go:322] 
	I0717 16:03:45.041383   92606 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:03:45.041497   92606 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:03:45.041641   92606 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:03:45.041719   92606 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:03:45.041833   92606 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:03:45.041936   92606 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:03:45.044408   92606 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:03:45.044511   92606 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:03:45.044698   92606 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:03:45.044810   92606 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:03:45.044972   92606 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:03:45.045105   92606 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 16:03:45.045253   92606 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-770000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-770000 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
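
Collected for convenience, the troubleshooting commands kubeadm recommends above form the following sequence (all taken verbatim from the message; CONTAINERID is kubeadm's placeholder, to be replaced with an ID from the ps output):

	systemctl status kubelet
	journalctl -xeu kubelet
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID
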
	
	I0717 16:03:45.045315   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 16:03:45.485166   92606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:03:45.497894   92606 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:03:45.497963   92606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:03:45.508170   92606 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 16:03:45.508194   92606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:03:45.682722   92606 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:03:45.682850   92606 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:03:45.739572   92606 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:03:45.819796   92606 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:05:41.553902   92606 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:05:41.553982   92606 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 16:05:41.557090   92606 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:05:41.557137   92606 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:05:41.557196   92606 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:05:41.557278   92606 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:05:41.557363   92606 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:05:41.557460   92606 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:05:41.557531   92606 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:05:41.557580   92606 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:05:41.557634   92606 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:05:41.579159   92606 out.go:204]   - Generating certificates and keys ...
	I0717 16:05:41.579266   92606 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:05:41.579407   92606 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:05:41.579540   92606 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 16:05:41.579672   92606 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 16:05:41.579805   92606 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 16:05:41.579893   92606 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 16:05:41.580003   92606 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 16:05:41.580097   92606 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 16:05:41.580204   92606 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 16:05:41.580305   92606 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 16:05:41.580362   92606 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 16:05:41.580448   92606 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:05:41.580522   92606 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:05:41.580622   92606 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:05:41.580737   92606 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:05:41.580826   92606 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:05:41.580948   92606 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:05:41.641928   92606 out.go:204]   - Booting up control plane ...
	I0717 16:05:41.642105   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:05:41.642241   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:05:41.642349   92606 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:05:41.642491   92606 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:05:41.642759   92606 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:05:41.642850   92606 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:05:41.642964   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:05:41.643302   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:05:41.643431   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:05:41.643716   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:05:41.643850   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:05:41.644149   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:05:41.644265   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:05:41.644548   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:05:41.644669   92606 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:05:41.644871   92606 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:05:41.644885   92606 kubeadm.go:322] 
	I0717 16:05:41.644925   92606 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:05:41.644973   92606 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:05:41.644984   92606 kubeadm.go:322] 
	I0717 16:05:41.645022   92606 kubeadm.go:322] This error is likely caused by:
	I0717 16:05:41.645056   92606 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:05:41.645165   92606 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:05:41.645174   92606 kubeadm.go:322] 
	I0717 16:05:41.645291   92606 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:05:41.645330   92606 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:05:41.645365   92606 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:05:41.645373   92606 kubeadm.go:322] 
	I0717 16:05:41.645489   92606 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:05:41.645586   92606 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:05:41.645688   92606 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:05:41.645742   92606 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:05:41.645831   92606 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:05:41.645883   92606 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:05:41.645911   92606 kubeadm.go:406] StartCluster complete in 3m54.450422661s
	I0717 16:05:41.646034   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:05:41.665901   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.665915   92606 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:05:41.665983   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:05:41.685039   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.685053   92606 logs.go:286] No container was found matching "etcd"
	I0717 16:05:41.685123   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:05:41.705372   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.705384   92606 logs.go:286] No container was found matching "coredns"
	I0717 16:05:41.705458   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:05:41.725851   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.725875   92606 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:05:41.725945   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:05:41.746212   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.746228   92606 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:05:41.746304   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:05:41.766612   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.766630   92606 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:05:41.766708   92606 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:05:41.789636   92606 logs.go:284] 0 containers: []
	W0717 16:05:41.789654   92606 logs.go:286] No container was found matching "kindnet"
	I0717 16:05:41.789668   92606 logs.go:123] Gathering logs for container status ...
	I0717 16:05:41.789679   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:05:41.845301   92606 logs.go:123] Gathering logs for kubelet ...
	I0717 16:05:41.845322   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:05:41.885248   92606 logs.go:123] Gathering logs for dmesg ...
	I0717 16:05:41.885262   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:05:41.899988   92606 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:05:41.900003   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:05:41.955719   92606 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:05:41.955735   92606 logs.go:123] Gathering logs for Docker ...
	I0717 16:05:41.955741   92606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
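
The diagnostics minikube gathers above can be reproduced by hand on the node; a minimal sketch using only commands already shown in this log (the apiserver filter returns nothing here, which is why describe nodes was refused on localhost:8443):

	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
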
	W0717 16:05:41.972304   92606 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 16:05:41.972324   92606 out.go:239] * 
	W0717 16:05:41.972363   92606 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 16:05:41.972375   92606 out.go:239] * 
	W0717 16:05:41.972998   92606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
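
The warnings repeated throughout this failure point at checks that can be confirmed directly on the node; a hedged follow-up sketch (standard Docker/systemd CLI commands, not taken from this report):

	docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" per the IsDockerSystemdCheck warning
	sudo systemctl enable kubelet.service      # the fix suggested by the Service-Kubelet warning
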
	I0717 16:05:42.057804   92606 out.go:177] 
	W0717 16:05:42.099686   92606 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 16:05:42.099765   92606 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 16:05:42.099797   92606 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 16:05:42.121903   92606 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
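The kubelet health endpoint at 127.0.0.1:10248 never answers during this start, and the stderr warnings above flag a cgroup-driver mismatch (Docker reports "cgroupfs"). A minimal sketch of the retry that minikube's own suggestion line proposes, reusing the profile name and versions from this run; whether it actually clears the kubelet check on this host is not verified here:

	# Sketch only: recreate the profile with the kubelet cgroup driver forced
	# to systemd, per the suggestion in the minikube output above.
	out/minikube-darwin-amd64 delete -p old-k8s-version-770000
	out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 \
	  --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd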
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56",
	        "Created": "2023-07-17T23:01:29.298658175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1218635,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:01:29.516403393Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hostname",
	        "HostsPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hosts",
	        "LogPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56-json.log",
	        "Name": "/old-k8s-version-770000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-770000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-770000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-770000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-770000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-770000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c07258dac6be990577a4e3f1e6cbd9a4759194f33d0f96dce83e5e8558aeddb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57125"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57127"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c07258dac6be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-770000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6129a8b881ba",
	                        "old-k8s-version-770000"
	                    ],
	                    "NetworkID": "e0b81b03df244d0caf05aedc1b790fca29cd02fdbba810fc90a219bab32afcb3",
	                    "EndpointID": "4fd6bd6d37711f3e0f856445ac094c78f36caf611e81b86031855dfec1573cf4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 6 (360.591979ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:05:42.635115   93736 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-770000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-770000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (258.14s)
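The status output above warns that kubectl is still pointing at a stale context after the failed start. A sketch of the fix the warning itself names, assuming the profile still exists:

	# Repoint kubectl at the profile's current endpoint, then confirm the context.
	out/minikube-darwin-amd64 update-context -p old-k8s-version-770000
	kubectl config current-context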

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-770000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-770000 create -f testdata/busybox.yaml: exit status 1 (35.208233ms)

                                                
                                                
** stderr ** 
	error: no openapi getter

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-770000 create -f testdata/busybox.yaml failed: exit status 1
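"error: no openapi getter" here is kubectl failing before it can even validate the manifest, because the "old-k8s-version-770000" context has no usable API endpoint (the status check below shows it is missing from the kubeconfig). Hypothetical pre-checks that would surface the same problem more directly, assuming kubectl is on PATH:

	# Confirm the context exists and the API server answers before creating resources.
	kubectl config get-contexts old-k8s-version-770000
	kubectl --context old-k8s-version-770000 cluster-info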
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[docker inspect output for old-k8s-version-770000 identical to the FirstStart post-mortem above; omitted]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 6 (364.312256ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:05:43.086391   93749 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-770000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-770000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[docker inspect output for old-k8s-version-770000 identical to the FirstStart post-mortem above; omitted]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 6 (373.84111ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:05:43.512697   93761 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-770000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-770000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-770000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 16:05:45.598023   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:05:45.798648   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:05:46.741976   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:05:49.091483   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:05:50.453543   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.458840   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.470223   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.490388   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.532647   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.614832   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:50.775005   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:51.096192   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:51.736353   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:53.016526   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:55.267777   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:05:55.576809   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:05:57.001630   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:06:00.697991   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:06:08.766806   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:08.772868   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:08.782954   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:08.803102   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:08.843411   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:08.924669   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:09.086240   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:09.408440   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:10.049973   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:10.938409   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:06:11.330380   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:13.892183   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:19.013406   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:29.254149   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:06:31.420933   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:06:48.522144   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:06:49.734605   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-770000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m24.463624569s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-770000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-770000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-770000 describe deploy/metrics-server -n kube-system: exit status 1 (35.954136ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-770000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-770000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
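
A minimal manual re-check of the image assertion above, assuming the kubeconfig context had survived (the describe call failed here because it did not); the expected value comes from the test's own --images/--registries flags:

	kubectl --context old-k8s-version-770000 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output (per the test flags): fake.domain/registry.k8s.io/echoserver:1.4
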
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56",
	        "Created": "2023-07-17T23:01:29.298658175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1218635,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:01:29.516403393Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hostname",
	        "HostsPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hosts",
	        "LogPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56-json.log",
	        "Name": "/old-k8s-version-770000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-770000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-770000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-770000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-770000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-770000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c07258dac6be990577a4e3f1e6cbd9a4759194f33d0f96dce83e5e8558aeddb0",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57125"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57126"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57127"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c07258dac6be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-770000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6129a8b881ba",
	                        "old-k8s-version-770000"
	                    ],
	                    "NetworkID": "e0b81b03df244d0caf05aedc1b790fca29cd02fdbba810fc90a219bab32afcb3",
	                    "EndpointID": "4fd6bd6d37711f3e0f856445ac094c78f36caf611e81b86031855dfec1573cf4",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
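
The provisioning steps later in this report read the host-mapped SSH port back out of this same inspect data; a minimal sketch against the Ports block above (22/tcp -> 57123), assuming the container is still running:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-770000
	# would print 57123 while the container above is up
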
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 6 (362.867549ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:07:08.430994   93805 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-770000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-770000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (84.92s)
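
The stale-kubeconfig warning in the status output above points at a possible manual recovery, sketched here on the assumption that the profile's context can still be rebuilt:

	minikube update-context -p old-k8s-version-770000
	kubectl config current-context
	# should name the profile's context if the update succeeded
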

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (508.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0717 16:07:11.014024   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:07:12.383664   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:07:16.215016   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:07:18.922896   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:07:30.696070   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:08:01.753342   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:08:02.897650   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:08:25.682919   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:08:29.440183   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:08:30.583960   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:08:34.306329   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:08:52.617217   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m25.765247752s)

                                                
                                                
-- stdout --
	* [old-k8s-version-770000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-770000 in cluster old-k8s-version-770000
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-770000" ...
	* Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 16:07:10.423245   93835 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:07:10.423414   93835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:07:10.423420   93835 out.go:309] Setting ErrFile to fd 2...
	I0717 16:07:10.423424   93835 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:07:10.423609   93835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:07:10.425024   93835 out.go:303] Setting JSON to false
	I0717 16:07:10.444188   93835 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25598,"bootTime":1689609632,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:07:10.444286   93835 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:07:10.466451   93835 out.go:177] * [old-k8s-version-770000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:07:10.509538   93835 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:07:10.509536   93835 notify.go:220] Checking for updates...
	I0717 16:07:10.531331   93835 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:07:10.552282   93835 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:07:10.573125   93835 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:07:10.594481   93835 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:07:10.616392   93835 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:07:10.637628   93835 config.go:182] Loaded profile config "old-k8s-version-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 16:07:10.661184   93835 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0717 16:07:10.682322   93835 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:07:10.740216   93835 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:07:10.740354   93835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:07:10.842133   93835 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:07:10.831047922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:07:10.864096   93835 out.go:177] * Using the docker driver based on existing profile
	I0717 16:07:10.907580   93835 start.go:298] selected driver: docker
	I0717 16:07:10.907602   93835 start.go:880] validating driver "docker" against &{Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:07:10.907720   93835 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:07:10.910429   93835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:07:11.057737   93835 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:07:11.015177335 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:07:11.058046   93835 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 16:07:11.058078   93835 cni.go:84] Creating CNI manager for ""
	I0717 16:07:11.058121   93835 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 16:07:11.058140   93835 start_flags.go:319] config:
	{Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:07:11.079872   93835 out.go:177] * Starting control plane node old-k8s-version-770000 in cluster old-k8s-version-770000
	I0717 16:07:11.154893   93835 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:07:11.176992   93835 out.go:177] * Pulling base image ...
	I0717 16:07:11.220642   93835 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 16:07:11.220671   93835 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:07:11.220760   93835 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 16:07:11.220794   93835 cache.go:57] Caching tarball of preloaded images
	I0717 16:07:11.221016   93835 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:07:11.221040   93835 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 16:07:11.221763   93835 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/config.json ...
	I0717 16:07:11.272096   93835 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:07:11.272119   93835 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:07:11.272139   93835 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:07:11.272179   93835 start.go:365] acquiring machines lock for old-k8s-version-770000: {Name:mk0f9163ab3562db295835b9e526369b56772523 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:07:11.272288   93835 start.go:369] acquired machines lock for "old-k8s-version-770000" in 81.526µs
	I0717 16:07:11.272324   93835 start.go:96] Skipping create...Using existing machine configuration
	I0717 16:07:11.272331   93835 fix.go:54] fixHost starting: 
	I0717 16:07:11.272552   93835 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Status}}
	I0717 16:07:11.321679   93835 fix.go:102] recreateIfNeeded on old-k8s-version-770000: state=Stopped err=<nil>
	W0717 16:07:11.321711   93835 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 16:07:11.343485   93835 out.go:177] * Restarting existing docker container for "old-k8s-version-770000" ...
	I0717 16:07:11.365558   93835 cli_runner.go:164] Run: docker start old-k8s-version-770000
	I0717 16:07:11.609137   93835 cli_runner.go:164] Run: docker container inspect old-k8s-version-770000 --format={{.State.Status}}
	I0717 16:07:11.664641   93835 kic.go:426] container "old-k8s-version-770000" state is running.
	I0717 16:07:11.665270   93835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:07:11.725310   93835 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/config.json ...
	I0717 16:07:11.725845   93835 machine.go:88] provisioning docker machine ...
	I0717 16:07:11.725892   93835 ubuntu.go:169] provisioning hostname "old-k8s-version-770000"
	I0717 16:07:11.725976   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:11.788979   93835 main.go:141] libmachine: Using SSH client type: native
	I0717 16:07:11.789544   93835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57348 <nil> <nil>}
	I0717 16:07:11.789563   93835 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-770000 && echo "old-k8s-version-770000" | sudo tee /etc/hostname
	I0717 16:07:11.790723   93835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 16:07:14.931331   93835 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-770000
	
	I0717 16:07:14.931436   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:14.980094   93835 main.go:141] libmachine: Using SSH client type: native
	I0717 16:07:14.980441   93835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57348 <nil> <nil>}
	I0717 16:07:14.980455   93835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-770000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-770000/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-770000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:07:15.109519   93835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 16:07:15.109539   93835 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:07:15.109566   93835 ubuntu.go:177] setting up certificates
	I0717 16:07:15.109573   93835 provision.go:83] configureAuth start
	I0717 16:07:15.109654   93835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:07:15.159024   93835 provision.go:138] copyHostCerts
	I0717 16:07:15.159136   93835 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:07:15.159146   93835 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:07:15.159272   93835 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:07:15.159504   93835 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:07:15.159510   93835 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:07:15.159575   93835 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:07:15.159741   93835 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:07:15.159750   93835 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:07:15.159811   93835 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:07:15.159952   93835 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-770000 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-770000]
	I0717 16:07:15.214492   93835 provision.go:172] copyRemoteCerts
	I0717 16:07:15.214556   93835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:07:15.214612   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:15.264969   93835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57348 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:07:15.358630   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:07:15.379986   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0717 16:07:15.401339   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 16:07:15.422595   93835 provision.go:86] duration metric: configureAuth took 312.996949ms
	I0717 16:07:15.422618   93835 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:07:15.422759   93835 config.go:182] Loaded profile config "old-k8s-version-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 16:07:15.422821   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:15.473149   93835 main.go:141] libmachine: Using SSH client type: native
	I0717 16:07:15.473521   93835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57348 <nil> <nil>}
	I0717 16:07:15.473531   93835 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:07:15.601658   93835 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:07:15.601675   93835 ubuntu.go:71] root file system type: overlay
	I0717 16:07:15.601750   93835 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:07:15.601836   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:15.652143   93835 main.go:141] libmachine: Using SSH client type: native
	I0717 16:07:15.652497   93835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57348 <nil> <nil>}
	I0717 16:07:15.652551   93835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:07:15.792258   93835 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:07:15.792361   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:15.842745   93835 main.go:141] libmachine: Using SSH client type: native
	I0717 16:07:15.843108   93835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57348 <nil> <nil>}
	I0717 16:07:15.843123   93835 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:07:15.976826   93835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 16:07:15.976848   93835 machine.go:91] provisioned docker machine in 4.250924196s
	I0717 16:07:15.976859   93835 start.go:300] post-start starting for "old-k8s-version-770000" (driver="docker")
	I0717 16:07:15.976869   93835 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:07:15.976937   93835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:07:15.976996   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:16.048377   93835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57348 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:07:16.141120   93835 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:07:16.145237   93835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:07:16.145262   93835 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:07:16.145270   93835 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:07:16.145275   93835 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:07:16.145284   93835 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:07:16.145378   93835 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:07:16.145551   93835 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:07:16.145734   93835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:07:16.154296   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:07:16.175347   93835 start.go:303] post-start completed in 198.475627ms
	I0717 16:07:16.175458   93835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:07:16.175519   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:16.225590   93835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57348 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:07:16.315505   93835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:07:16.320755   93835 fix.go:56] fixHost completed within 5.048359557s
	I0717 16:07:16.320775   93835 start.go:83] releasing machines lock for "old-k8s-version-770000", held for 5.048420479s
	I0717 16:07:16.320862   93835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-770000
	I0717 16:07:16.370862   93835 ssh_runner.go:195] Run: cat /version.json
	I0717 16:07:16.370907   93835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:07:16.370938   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:16.370992   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:16.426625   93835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57348 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:07:16.426645   93835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57348 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/old-k8s-version-770000/id_rsa Username:docker}
	I0717 16:07:16.619780   93835 ssh_runner.go:195] Run: systemctl --version
	I0717 16:07:16.624995   93835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0717 16:07:16.630769   93835 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0717 16:07:16.630845   93835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0717 16:07:16.639843   93835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0717 16:07:16.648578   93835 cni.go:311] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I0717 16:07:16.648591   93835 start.go:466] detecting cgroup driver to use...
	I0717 16:07:16.648605   93835 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:07:16.648724   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:07:16.663843   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I0717 16:07:16.673474   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:07:16.683236   93835 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:07:16.683299   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:07:16.693436   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:07:16.703844   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:07:16.714567   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:07:16.724611   93835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:07:16.735173   93835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:07:16.745742   93835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:07:16.754584   93835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:07:16.763216   93835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:07:16.834297   93835 ssh_runner.go:195] Run: sudo systemctl restart containerd
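
The sed edits above flip containerd's cgroup and runtime settings in /etc/containerd/config.toml before the daemon restart. As a minimal sketch (not minikube's actual code) of the kind of in-place rewrite those commands perform, here is the SystemdCgroup toggle in Go:

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml" // path taken from the log above
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Mirrors `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`:
        // force the cgroupfs driver by disabling systemd cgroup integration.
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            log.Fatal(err)
        }
    }
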
	I0717 16:07:16.919125   93835 start.go:466] detecting cgroup driver to use...
	I0717 16:07:16.919153   93835 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:07:16.919278   93835 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:07:16.932693   93835 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:07:16.932765   93835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:07:16.945984   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:07:16.964301   93835 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:07:16.990003   93835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:07:17.000826   93835 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:07:17.018268   93835 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:07:17.117155   93835 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:07:17.212351   93835 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:07:17.212367   93835 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:07:17.230156   93835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:07:17.308096   93835 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:07:17.553637   93835 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:07:17.579759   93835 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:07:17.649203   93835 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 24.0.4 ...
	I0717 16:07:17.649450   93835 cli_runner.go:164] Run: docker exec -t old-k8s-version-770000 dig +short host.docker.internal
	I0717 16:07:17.764925   93835 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:07:17.765044   93835 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:07:17.770153   93835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
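
The one-liner above updates /etc/hosts without truncating it mid-write: grep -v drops any stale host.minikube.internal line, echo appends the fresh mapping, and the result goes to a temp file that is then copied into place. A rough Go equivalent of the filter-and-append step (upsertHost is a hypothetical helper; the real pipeline stages through /tmp/h.$$ with sudo cp):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // upsertHost drops any existing line for the given hostname and appends a
    // fresh "IP<TAB>hostname" entry, mirroring the grep -v / echo pipeline.
    func upsertHost(hostsFile, ip, name string) error {
        data, err := os.ReadFile(hostsFile)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
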
	I0717 16:07:17.781387   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:17.832340   93835 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 16:07:17.832436   93835 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:07:17.852743   93835 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 16:07:17.852772   93835 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 16:07:17.852840   93835 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 16:07:17.862075   93835 ssh_runner.go:195] Run: which lz4
	I0717 16:07:17.866660   93835 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0717 16:07:17.871003   93835 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0717 16:07:17.871031   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (369789069 bytes)
	I0717 16:07:22.867738   93835 docker.go:600] Took 5.001096 seconds to copy over tarball
	I0717 16:07:22.867820   93835 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0717 16:07:25.116551   93835 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.248679572s)
	I0717 16:07:25.116565   93835 ssh_runner.go:146] rm: /preloaded.tar.lz4
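
The ~370 MB preload tarball is copied to the node and unpacked with lz4 directly into /var, seeding /var/lib/docker so the runtime comes back up with the cached images already in place. A hedged sketch of driving that same extraction command (assumes lz4 and the tarball are present on the node):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same command the log shows minikube running over SSH:
        // `tar -I lz4` decompresses with lz4, `-C /var` unpacks into the
        // directory tree that contains /var/lib/docker.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }
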
	I0717 16:07:25.170395   93835 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0717 16:07:25.182320   93835 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2499 bytes)
	I0717 16:07:25.200208   93835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:07:25.279267   93835 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:07:25.734612   93835 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:07:25.755604   93835 docker.go:636] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I0717 16:07:25.755621   93835 docker.go:642] registry.k8s.io/kube-apiserver:v1.16.0 wasn't preloaded
	I0717 16:07:25.755637   93835 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0717 16:07:25.761370   93835 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:07:25.761387   93835 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:07:25.761449   93835 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:07:25.761459   93835 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:07:25.761466   93835 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0717 16:07:25.761377   93835 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:07:25.761511   93835 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:07:25.762431   93835 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0717 16:07:25.766455   93835 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:07:25.770062   93835 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:07:25.770093   93835 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:07:25.770087   93835 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0717 16:07:25.770104   93835 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:07:25.770205   93835 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:07:25.770329   93835 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:07:25.770652   93835 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0717 16:07:26.923212   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:07:26.944821   93835 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0717 16:07:26.944863   93835 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:07:26.944913   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.16.0
	I0717 16:07:26.968261   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0717 16:07:27.308715   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:07:27.440229   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:07:27.463282   93835 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0717 16:07:27.463316   93835 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:07:27.463378   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0717 16:07:27.486009   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0717 16:07:27.490090   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0717 16:07:27.511788   93835 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0717 16:07:27.511816   93835 docker.go:316] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0717 16:07:27.511875   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.3.15-0
	I0717 16:07:27.533759   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0717 16:07:27.687477   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0717 16:07:27.710726   93835 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0717 16:07:27.710756   93835 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.2
	I0717 16:07:27.710830   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.2
	I0717 16:07:27.732661   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0717 16:07:27.925409   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:07:27.947871   93835 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0717 16:07:27.947900   93835 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:07:27.947964   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0717 16:07:27.968797   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0717 16:07:28.228561   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:07:28.250519   93835 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0717 16:07:28.250548   93835 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:07:28.250613   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0717 16:07:28.269581   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0717 16:07:28.521953   93835 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0717 16:07:28.542773   93835 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0717 16:07:28.542799   93835 docker.go:316] Removing image: registry.k8s.io/pause:3.1
	I0717 16:07:28.542872   93835 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I0717 16:07:28.562791   93835 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0717 16:07:28.562853   93835 cache_images.go:92] LoadImages completed in 2.807173193s
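
Each required image went through the same decision above: inspect it in the runtime, and if it is absent (or present under the wrong ID), remove the stale tag and load it from the local cache directory, which is what ultimately failed here because the kube-proxy cache file did not exist. A simplified sketch of that loop against the docker CLI (runtimeImageID is a made-up helper; the cache path layout is copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // runtimeImageID returns the image ID the runtime has for ref, or "" if absent.
    func runtimeImageID(ref string) string {
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
        if err != nil {
            return "" // docker exits non-zero when the image is missing
        }
        return strings.TrimSpace(string(out))
    }

    func main() {
        cacheDir := "/Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64"
        images := []string{"registry.k8s.io/kube-apiserver:v1.16.0", "registry.k8s.io/pause:3.1"}
        for _, ref := range images {
            if id := runtimeImageID(ref); id != "" {
                continue // present; a real check would also compare the expected hash
            }
            // Missing: drop any stale tag, then load from the on-disk cache,
            // e.g. .../registry.k8s.io/kube-apiserver_v1.16.0 as in the log.
            _ = exec.Command("docker", "rmi", ref).Run()
            fmt.Println("would load from:", filepath.Join(cacheDir, strings.ReplaceAll(ref, ":", "_")))
        }
    }
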
	W0717 16:07:28.562908   93835 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0717 16:07:28.562978   93835 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:07:28.617905   93835 cni.go:84] Creating CNI manager for ""
	I0717 16:07:28.617923   93835 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 16:07:28.617943   93835 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 16:07:28.617960   93835 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-770000 NodeName:old-k8s-version-770000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0717 16:07:28.618079   93835 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-770000"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-770000
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.76.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 16:07:28.618151   93835 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-770000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 16:07:28.618215   93835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0717 16:07:28.628177   93835 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:07:28.628248   93835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:07:28.637350   93835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I0717 16:07:28.654606   93835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:07:28.671440   93835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
	I0717 16:07:28.688380   93835 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:07:28.693312   93835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:07:28.704621   93835 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000 for IP: 192.168.76.2
	I0717 16:07:28.704639   93835 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:07:28.704861   93835 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:07:28.704960   93835 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:07:28.705091   93835 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/client.key
	I0717 16:07:28.705164   93835 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key.31bdca25
	I0717 16:07:28.705229   93835 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key
	I0717 16:07:28.705492   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:07:28.705539   93835 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:07:28.705550   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:07:28.705590   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:07:28.705625   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:07:28.705660   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:07:28.705729   93835 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:07:28.706278   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:07:28.728915   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0717 16:07:28.751667   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:07:28.774142   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/old-k8s-version-770000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0717 16:07:28.797550   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:07:28.820520   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:07:28.844441   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:07:28.866568   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:07:28.889343   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:07:28.911625   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:07:28.933806   93835 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:07:28.956700   93835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:07:28.975302   93835 ssh_runner.go:195] Run: openssl version
	I0717 16:07:28.981863   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:07:28.994170   93835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:07:28.999466   93835 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:07:28.999518   93835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:07:29.007366   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:07:29.017549   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:07:29.028678   93835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:07:29.033683   93835 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:07:29.033734   93835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:07:29.041371   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:07:29.051455   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:07:29.061736   93835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:07:29.066903   93835 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:07:29.066955   93835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:07:29.074187   93835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 16:07:29.083424   93835 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:07:29.088553   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 16:07:29.095728   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 16:07:29.102680   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 16:07:29.109757   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 16:07:29.116910   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 16:07:29.124258   93835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
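
Each openssl x509 -checkend 86400 call above exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and 1 if it will have expired by then, so a clean pass means no control-plane cert needs regeneration. A small wrapper illustrating that exit-status contract (cert path illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithinADay reports whether the certificate at path expires in the
    // next 86400 seconds, using openssl's exit status (0 = still valid then).
    func expiresWithinADay(path string) bool {
        err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
        return err != nil
    }

    func main() {
        fmt.Println(expiresWithinADay("/var/lib/minikube/certs/apiserver.crt"))
    }
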
	I0717 16:07:29.131335   93835 kubeadm.go:404] StartCluster: {Name:old-k8s-version-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-770000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:07:29.131463   93835 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:07:29.152468   93835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:07:29.162010   93835 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 16:07:29.162029   93835 kubeadm.go:636] restartCluster start
	I0717 16:07:29.162084   93835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 16:07:29.170784   93835 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:07:29.170885   93835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-770000
	I0717 16:07:29.225290   93835 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-770000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:07:29.225461   93835 kubeconfig.go:146] "old-k8s-version-770000" context is missing from /Users/jenkins/minikube-integration/16899-76867/kubeconfig - will repair!
	I0717 16:07:29.225816   93835 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:07:29.227371   93835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 16:07:29.238294   93835 api_server.go:166] Checking apiserver status ...
	I0717 16:07:29.238363   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:07:29.248758   93835 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	[... the identical "Checking apiserver status ..." / "stopped: unable to get apiserver pid ... Process exited with status 1" cycle repeats every ~500ms, 19 more times, from I0717 16:07:29.748870 through W0717 16:07:38.761243 ...]
	I0717 16:07:39.238740   93835 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
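
The collapsed burst above is a fixed-interval poll: the same pgrep runs roughly every 500ms until a ~10s context deadline fires, and only then is the cluster marked for reconfiguration. A generic sketch of that poll-until-deadline shape (interval and timeout are read off the timestamps, not taken from minikube's source):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    func waitForAPIServer(ctx context.Context) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            // pgrep exits 0 once a kube-apiserver process matches.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err() // surfaces as "context deadline exceeded"
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := waitForAPIServer(ctx); errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }
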
	I0717 16:07:39.238756   93835 kubeadm.go:1128] stopping kube-system containers ...
	I0717 16:07:39.238830   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:07:39.257562   93835 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 16:07:39.269753   93835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:07:39.278843   93835 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5695 Jul 17 23:03 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Jul 17 23:03 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Jul 17 23:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Jul 17 23:03 /etc/kubernetes/scheduler.conf
	
	I0717 16:07:39.278909   93835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 16:07:39.287974   93835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 16:07:39.297085   93835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 16:07:39.306207   93835 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 16:07:39.315385   93835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:07:39.324435   93835 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 16:07:39.324449   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:07:39.381894   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:07:40.236047   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:07:40.428685   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:07:40.509595   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
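
Rather than a full kubeadm init, the restart path replays individual init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd), each against the regenerated kubeadm.yaml. A sketch of sequencing those phase invocations (binary and config paths copied from the log; error handling simplified):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            if out, err := exec.Command("sudo", append([]string{kubeadm}, args...)...).CombinedOutput(); err != nil {
                log.Fatalf("%v failed: %v\n%s", p, err, out)
            }
        }
    }
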
	I0717 16:07:40.571659   93835 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:07:40.571733   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same probe, "sudo pgrep -xnf kube-apiserver.*minikube.*", repeats every ~500ms, 119 more times, from I0717 16:07:41.084271 through I0717 16:08:40.084826, without ever finding an apiserver process ...]
	I0717 16:08:40.584803   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:40.607429   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.607448   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:40.607549   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:40.629088   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.629102   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:40.629201   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:40.649794   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.649808   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:40.649882   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:40.672461   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.672476   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:40.672560   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:40.693499   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.693514   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:40.693588   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:40.712630   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.712644   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:40.712718   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:40.733755   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.733769   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:40.733838   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:40.755489   93835 logs.go:284] 0 containers: []
	W0717 16:08:40.755503   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:40.755514   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:40.755523   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:08:40.813417   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:40.813433   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:40.859957   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:40.859983   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:40.876259   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:40.876275   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:40.946521   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:40.946543   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:40.946551   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
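Editor's note: each diagnostic cycle above enumerates control-plane containers by the k8s_<component> name prefix that Docker-backed kubelets assign, using docker ps -a --filter=name=... --format={{.ID}}; an empty result produces the paired "0 containers" / "No container was found" lines. A hedged Go sketch of that enumeration (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists the IDs of all containers, running or exited,
// whose names start with the k8s_<component> prefix.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; empty output means no matching containers.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```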
	I0717 16:08:43.467038   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:43.479352   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:43.499052   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.499067   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:43.499145   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:43.519063   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.519077   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:43.519153   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:43.539881   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.539905   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:43.540044   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:43.596122   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.596139   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:43.596214   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:43.616180   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.616194   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:43.616265   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:43.637680   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.637694   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:43.637768   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:43.658951   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.658966   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:43.659033   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:43.681800   93835 logs.go:284] 0 containers: []
	W0717 16:08:43.681814   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:43.681821   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:43.681828   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:43.724345   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:43.724372   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:43.738848   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:43.738900   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:43.798415   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:43.798430   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:43.798439   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:43.814970   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:43.814985   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
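Editor's note: the "container status" gatherer relies on a shell fallback. In the logged one-liner, `which crictl || echo crictl` substitutes crictl's full path when it is installed; when it is not, the bare name `crictl` is substituted, that command fails, and the `|| sudo docker ps -a` branch runs instead. A Go sketch reproducing the logged command verbatim:

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// docker otherwise, exactly as the backtick substitution in the log
// does.
func containerStatus() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(out)
}
```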
	I0717 16:08:46.369236   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:46.381658   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:46.400681   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.400694   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:46.400762   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:46.419865   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.419879   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:46.419948   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:46.440041   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.440059   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:46.440134   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:46.460378   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.460394   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:46.460469   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:46.480031   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.480062   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:46.480206   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:46.501293   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.501307   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:46.501391   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:46.523016   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.523030   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:46.523101   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:46.543552   93835 logs.go:284] 0 containers: []
	W0717 16:08:46.543569   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:46.543577   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:46.543586   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:46.613849   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:46.613866   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:46.630007   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:46.630033   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:46.694688   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:46.694708   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:46.694716   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:46.710614   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:46.710627   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
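Editor's note: every describe-nodes attempt fails with "connection to the server localhost:8443 was refused", which matches the empty kube-apiserver container list: nothing is listening on the apiserver port at all. One way to confirm that directly, sketched in Go (the /healthz probe and the skipped certificate check are assumptions; whether /healthz answers unauthenticated depends on apiserver flags):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// Test clusters serve a self-signed certificate.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// A "connection refused" error here reproduces the kubectl
	// symptom: no process is bound to 8443.
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
```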
	I0717 16:08:49.263211   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:49.275395   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:49.294283   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.294297   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:49.294370   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:49.313611   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.313624   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:49.313702   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:49.333624   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.333637   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:49.333703   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:49.352873   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.352889   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:49.352957   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:49.371742   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.371756   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:49.371833   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:49.391529   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.391542   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:49.391628   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:49.411149   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.411164   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:49.411235   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:49.430947   93835 logs.go:284] 0 containers: []
	W0717 16:08:49.430961   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:49.430968   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:49.430975   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:49.470963   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:49.470977   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:49.484795   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:49.484811   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:49.542433   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:49.542450   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:49.542457   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:49.559270   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:49.559286   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
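Editor's note: the journal gatherers are deliberately bounded (journalctl ... -n 400) so a wedged cluster cannot flood the report. A hedged Go sketch of such a bounded, multi-unit gatherer (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

// unitLogs returns the last n journal entries for the given systemd
// units, mirroring the bounded `journalctl -u <unit> -n 400` calls
// in the log above.
func unitLogs(n int, units ...string) (string, error) {
	args := []string{"journalctl"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	args = append(args, "-n", strconv.Itoa(n))
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

func main() {
	logs, err := unitLogs(400, "docker", "cri-docker")
	if err != nil {
		fmt.Println("error:", err)
	}
	fmt.Print(logs)
}
```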
	I0717 16:08:52.113743   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:52.125965   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:52.147815   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.147830   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:52.147902   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:52.168537   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.168554   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:52.168647   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:52.189732   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.189746   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:52.189819   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:52.212061   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.212074   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:52.212143   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:52.231684   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.231697   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:52.231762   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:52.251051   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.251064   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:52.251145   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:52.273274   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.273290   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:52.273364   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:52.295858   93835 logs.go:284] 0 containers: []
	W0717 16:08:52.295878   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:52.295886   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:52.295896   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:52.354769   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:52.354782   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:52.354807   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:52.371676   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:52.371692   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:08:52.429428   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:52.429446   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:52.470232   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:52.470250   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
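Editor's note: the dmesg step is similarly capped and filtered to warning-and-worse kernel messages. A self-contained Go sketch reproducing the logged pipeline:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirror the bounded dmesg capture from the log: -P disables the
	// pager, -H forces human-readable timestamps, -L=never strips
	// color codes, --level keeps only warning-and-worse messages,
	// and tail caps the output at 400 lines.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
	}
	fmt.Print(string(out))
}
```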
	I0717 16:08:54.985925   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:54.999975   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:55.043460   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.043479   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:55.043591   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:55.067898   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.067917   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:55.068005   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:55.101155   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.101174   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:55.101270   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:55.128362   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.128383   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:55.128480   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:55.150964   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.150999   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:55.151070   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:55.170665   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.170677   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:55.170743   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:55.193200   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.193219   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:55.193330   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:55.219208   93835 logs.go:284] 0 containers: []
	W0717 16:08:55.219232   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:55.219243   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:55.219252   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:55.266356   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:55.266374   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:55.281050   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:55.281066   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:55.354851   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:55.354867   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:55.354878   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:55.371644   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:55.371658   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:08:57.943231   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:08:57.956911   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:08:57.976799   93835 logs.go:284] 0 containers: []
	W0717 16:08:57.976811   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:08:57.976875   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:08:57.996611   93835 logs.go:284] 0 containers: []
	W0717 16:08:57.996627   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:08:57.996699   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:08:58.018486   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.018500   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:08:58.018583   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:08:58.039273   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.039287   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:08:58.039363   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:08:58.090178   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.090193   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:08:58.090265   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:08:58.115917   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.115930   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:08:58.116009   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:08:58.137346   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.137360   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:08:58.137430   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:08:58.157361   93835 logs.go:284] 0 containers: []
	W0717 16:08:58.157377   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:08:58.157383   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:08:58.157391   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:08:58.198025   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:08:58.198059   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:08:58.213538   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:08:58.213560   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:08:58.282820   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:08:58.282863   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:08:58.282880   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:08:58.301373   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:08:58.301392   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:00.866707   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:00.877565   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:00.898831   93835 logs.go:284] 0 containers: []
	W0717 16:09:00.898842   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:00.898918   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:00.921933   93835 logs.go:284] 0 containers: []
	W0717 16:09:00.921947   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:00.922016   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:00.942162   93835 logs.go:284] 0 containers: []
	W0717 16:09:00.942177   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:00.942257   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:00.961979   93835 logs.go:284] 0 containers: []
	W0717 16:09:00.961993   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:00.962061   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:00.984878   93835 logs.go:284] 0 containers: []
	W0717 16:09:00.984891   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:00.984965   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:01.008338   93835 logs.go:284] 0 containers: []
	W0717 16:09:01.008351   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:01.008420   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:01.031532   93835 logs.go:284] 0 containers: []
	W0717 16:09:01.031552   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:01.031643   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:01.054584   93835 logs.go:284] 0 containers: []
	W0717 16:09:01.054610   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:01.054621   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:01.054639   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:01.099867   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:01.099893   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:01.115285   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:01.115301   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:01.176394   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:01.176410   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:01.176421   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:01.193090   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:01.193105   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:03.751586   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:03.768153   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:03.796812   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.796830   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:03.796916   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:03.819012   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.819029   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:03.819100   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:03.839971   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.839983   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:03.840057   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:03.860470   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.860483   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:03.860561   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:03.881145   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.881161   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:03.881236   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:03.902547   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.902567   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:03.902671   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:03.926353   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.926368   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:03.926444   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:03.948352   93835 logs.go:284] 0 containers: []
	W0717 16:09:03.948370   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:03.948380   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:03.948393   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:03.995920   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:03.995941   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:04.010768   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:04.010784   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:04.101611   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:04.101625   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:04.101637   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:04.120019   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:04.120060   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:06.677017   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:06.689692   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:06.708893   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.708915   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:06.708996   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:06.730196   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.730211   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:06.730291   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:06.752147   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.752162   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:06.752235   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:06.771443   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.771460   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:06.771526   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:06.792042   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.792060   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:06.792150   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:06.811658   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.811675   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:06.811759   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:06.832130   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.832146   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:06.832233   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:06.853410   93835 logs.go:284] 0 containers: []
	W0717 16:09:06.853444   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:06.853455   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:06.853466   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:06.916220   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:06.916240   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:06.916250   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:06.933564   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:06.933585   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:06.987673   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:06.987697   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:07.032569   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:07.032590   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:09.551616   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:09.563399   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:09.583224   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.583236   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:09.583303   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:09.604185   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.604200   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:09.604284   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:09.625346   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.625359   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:09.625428   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:09.648230   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.648246   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:09.648341   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:09.677115   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.677129   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:09.677202   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:09.698090   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.698105   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:09.698192   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:09.718806   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.718824   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:09.718908   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:09.740621   93835 logs.go:284] 0 containers: []
	W0717 16:09:09.740640   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:09.740650   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:09.740662   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:09.758153   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:09.758169   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:09.812867   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:09.812882   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:09.862371   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:09.862395   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:09.878032   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:09.878047   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:09.940288   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:12.440804   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:12.453433   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:12.475131   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.475144   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:12.475225   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:12.496412   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.496426   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:12.496498   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:12.518311   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.518326   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:12.518401   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:12.541455   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.541468   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:12.541543   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:12.562979   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.562991   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:12.563062   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:12.584381   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.584401   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:12.584473   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:12.606468   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.606519   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:12.606602   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:12.628751   93835 logs.go:284] 0 containers: []
	W0717 16:09:12.628764   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:12.628772   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:12.628780   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:12.673147   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:12.673162   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:12.688033   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:12.688050   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:12.748163   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:12.748194   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:12.748201   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:12.765516   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:12.765529   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:15.320045   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:15.331127   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:15.350586   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.350600   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:15.350675   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:15.370290   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.370303   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:15.370372   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:15.389682   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.389697   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:15.389791   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:15.409149   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.409162   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:15.409236   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:15.428148   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.461210   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:15.461358   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:15.486436   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.486450   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:15.486520   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:15.506983   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.506996   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:15.507080   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:15.526847   93835 logs.go:284] 0 containers: []
	W0717 16:09:15.526861   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:15.526868   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:15.526876   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:15.586098   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:15.586109   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:15.586116   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:15.601850   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:15.601863   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:15.654951   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:15.654966   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:15.696821   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:15.696837   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:18.210827   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:18.223787   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:18.243771   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.243789   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:18.243856   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:18.265968   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.265981   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:18.266058   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:18.285684   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.285698   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:18.285790   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:18.307164   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.307178   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:18.307256   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:18.327917   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.327930   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:18.327995   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:18.347802   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.347815   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:18.347883   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:18.366670   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.366684   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:18.366777   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:18.403258   93835 logs.go:284] 0 containers: []
	W0717 16:09:18.403270   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:18.403277   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:18.403286   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:18.448235   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:18.448251   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:18.463372   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:18.463387   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:18.528990   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:18.529002   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:18.529009   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:18.544944   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:18.544959   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:21.100701   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:21.113526   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:21.133649   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.133662   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:21.133733   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:21.152670   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.152684   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:21.152757   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:21.171814   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.171828   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:21.171901   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:21.190608   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.190622   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:21.190691   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:21.210297   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.210310   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:21.210391   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:21.229239   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.229252   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:21.229342   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:21.248793   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.248813   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:21.248897   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:21.270600   93835 logs.go:284] 0 containers: []
	W0717 16:09:21.270613   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:21.270621   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:21.270636   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:21.313589   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:21.313612   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:21.328316   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:21.328331   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:21.387151   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:21.387168   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:21.387175   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:21.404041   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:21.404056   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
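The block above scans for every expected control-plane container by filtering docker's container list on the k8s_<component> name prefix. A sketch of the same scan as a loop (the docker command is verbatim from the log; the loop wrapper is illustrative):

    # Check each expected component; empty output means the pod's
    # container was never created, matching the warnings above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format='{{.ID}}')
      [ -z "$ids" ] && echo "No container was found matching \"${c}\""
    done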
	I0717 16:09:23.960262   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:23.972460   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:23.991382   93835 logs.go:284] 0 containers: []
	W0717 16:09:23.991393   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:23.991448   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:24.010904   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.010919   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:24.010995   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:24.029712   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.029726   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:24.029793   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:24.049129   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.049144   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:24.049211   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:24.069598   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.069611   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:24.069678   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:24.089018   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.089031   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:24.089101   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:24.107702   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.107717   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:24.107785   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:24.128613   93835 logs.go:284] 0 containers: []
	W0717 16:09:24.128627   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:24.128635   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:24.128643   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:24.171598   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:24.171613   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:24.185637   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:24.185664   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:24.244278   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:24.244290   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:24.244298   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:24.261382   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:24.261399   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
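The repeated "describe nodes" failure is the clearest signal in this trace: kubectl is pointed at the node-local kubeconfig, so "connection refused" on localhost:8443 means the apiserver itself is not listening, not a client misconfiguration. The probe, plus an independent port check (the curl line is an added suggestion and assumes curl is available in the node image):

    # Verbatim probe from the log:
    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Hypothetical cross-check that the apiserver port is up at all:
    curl -sk https://localhost:8443/healthz || echo "apiserver not listening"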
	I0717 16:09:26.819453   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:26.832852   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:26.854399   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.854410   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:26.854479   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:26.876835   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.876851   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:26.876925   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:26.896389   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.896404   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:26.896473   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:26.915332   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.915347   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:26.915416   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:26.934736   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.934748   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:26.934817   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:26.955972   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.955985   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:26.956053   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:26.977327   93835 logs.go:284] 0 containers: []
	W0717 16:09:26.977346   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:26.977413   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:27.002901   93835 logs.go:284] 0 containers: []
	W0717 16:09:27.002931   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:27.002946   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:27.002960   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:27.068323   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:27.068338   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:27.110658   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:27.110674   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:27.125956   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:27.125987   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:27.190498   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:27.190514   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:27.190536   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
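Note the cadence: each gathering block ends, and roughly 2.5 seconds later the same pgrep check re-runs. The timestamps imply a wait loop along these lines (interval and shape inferred from this log, not taken from minikube's source):

    # Poll until a kube-apiserver process for this minikube profile
    # appears; each repeated block in the surrounding trace is one
    # pass through a loop like this.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 2.5
    done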
	I0717 16:09:29.709627   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:29.721081   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:29.742139   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.742151   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:29.742211   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:29.766181   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.766194   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:29.766256   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:29.789509   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.789524   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:29.789606   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:29.814742   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.814761   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:29.814837   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:29.840430   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.840455   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:29.840569   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:29.865180   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.865193   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:29.865259   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:29.900307   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.900325   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:29.900406   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:29.924021   93835 logs.go:284] 0 containers: []
	W0717 16:09:29.924040   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:29.924051   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:29.924064   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:29.966462   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:29.966478   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:29.987105   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:29.987140   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:30.055294   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:30.055305   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:30.055312   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:30.073554   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:30.073568   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
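For reference, the three host-side log sources gathered on every pass, with flags exactly as run above (last 400 lines per systemd unit; dmesg restricted to warn level and worse):

    sudo journalctl -u kubelet -n 400                # kubelet unit log
    sudo journalctl -u docker -u cri-docker -n 400   # container runtime logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400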
	I0717 16:09:32.635234   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:32.650867   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:32.673522   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.673536   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:32.673611   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:32.698212   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.698226   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:32.698285   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:32.719163   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.719176   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:32.719245   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:32.740406   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.740430   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:32.740507   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:32.764075   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.764090   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:32.764167   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:32.792529   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.792543   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:32.792606   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:32.818359   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.818376   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:32.818464   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:32.842340   93835 logs.go:284] 0 containers: []
	W0717 16:09:32.842353   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:32.842361   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:32.842371   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:32.889275   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:32.889296   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:32.905966   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:32.905983   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:32.970863   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:32.970880   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:32.970890   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:32.990139   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:32.990155   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:35.557713   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:35.576779   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:35.605573   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.605594   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:35.605704   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:35.627642   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.627656   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:35.627728   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:35.648240   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.648261   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:35.648347   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:35.672192   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.672207   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:35.672279   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:35.692054   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.692067   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:35.692136   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:35.711450   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.711466   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:35.711534   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:35.732175   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.732188   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:35.732258   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:35.754113   93835 logs.go:284] 0 containers: []
	W0717 16:09:35.754126   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:35.754134   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:35.754142   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:35.798357   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:35.798377   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:35.813918   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:35.813936   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:35.873827   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:35.873839   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:35.873846   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:35.890161   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:35.890175   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:38.444547   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:38.456985   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:38.479542   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.479557   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:38.479644   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:38.501334   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.501348   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:38.501424   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:38.523371   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.523387   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:38.523452   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:38.544706   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.544718   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:38.544787   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:38.564564   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.564578   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:38.564646   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:38.584880   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.584895   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:38.584971   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:38.606900   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.606912   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:38.606992   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:38.626379   93835 logs.go:284] 0 containers: []
	W0717 16:09:38.626393   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:38.626400   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:38.626408   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:38.642484   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:38.642498   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:38.700691   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:38.700708   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:38.751427   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:38.751449   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:38.767550   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:38.767567   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:38.837471   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:41.337712   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:41.348591   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:41.373883   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.373916   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:41.374012   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:41.394669   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.394685   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:41.394760   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:41.414841   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.414854   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:41.414927   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:41.436767   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.436780   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:41.436882   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:41.458943   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.458958   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:41.459036   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:41.479504   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.479517   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:41.479586   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:41.500947   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.500960   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:41.501047   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:41.526201   93835 logs.go:284] 0 containers: []
	W0717 16:09:41.526213   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:41.526223   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:41.526231   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:41.569905   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:41.569924   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:41.584398   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:41.584414   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:41.648028   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:41.648045   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:41.648053   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:41.665843   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:41.665859   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:44.220266   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:44.232191   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:44.253658   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.253672   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:44.253739   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:44.273912   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.273926   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:44.274003   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:44.294149   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.294165   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:44.294238   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:44.314432   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.314445   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:44.314518   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:44.335961   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.335974   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:44.336063   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:44.357288   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.357302   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:44.357380   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:44.379335   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.379351   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:44.379432   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:44.404210   93835 logs.go:284] 0 containers: []
	W0717 16:09:44.404226   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:44.404235   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:44.404244   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:44.468282   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:44.468303   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:44.468310   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:44.485595   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:44.485609   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:44.540446   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:44.540459   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:44.586160   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:44.586177   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:47.102172   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:47.118282   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:47.142725   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.142740   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:47.142812   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:47.161536   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.161550   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:47.161617   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:47.180655   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.180670   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:47.180738   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:47.206342   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.206362   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:47.206474   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:47.230920   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.230937   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:47.231079   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:47.252295   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.252308   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:47.252376   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:47.272340   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.272354   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:47.272425   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:47.294744   93835 logs.go:284] 0 containers: []
	W0717 16:09:47.294761   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:47.294770   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:47.294780   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:47.317328   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:47.317349   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:47.439765   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:47.439781   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:47.482278   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:47.482295   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:47.497672   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:47.497688   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:47.568574   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:50.069527   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:50.081112   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:50.100236   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.100249   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:50.100322   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:50.120378   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.120392   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:50.120460   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:50.139167   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.139181   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:50.139248   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:50.158518   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.158532   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:50.158635   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:50.181221   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.181234   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:50.181301   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:50.201527   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.201540   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:50.201608   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:50.222696   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.222713   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:50.222804   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:50.253886   93835 logs.go:284] 0 containers: []
	W0717 16:09:50.253901   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:50.253909   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:50.253921   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:50.271709   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:50.271722   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:50.328207   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:50.328222   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:50.372523   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:50.372542   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:50.387189   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:50.387204   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:50.450229   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:52.965843   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:52.977428   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:52.997719   93835 logs.go:284] 0 containers: []
	W0717 16:09:52.997736   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:52.997811   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:53.020252   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.020265   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:53.020344   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:53.042586   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.042602   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:53.042680   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:53.063286   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.063304   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:53.063406   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:53.108192   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.108215   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:53.108301   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:53.131810   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.131826   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:53.131900   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:53.153950   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.153971   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:53.154051   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:53.175331   93835 logs.go:284] 0 containers: []
	W0717 16:09:53.175345   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:53.175352   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:53.175359   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:53.191745   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:53.191765   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:53.250191   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:53.250206   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:53.294575   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:53.294590   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:53.308800   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:53.308815   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:53.370659   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:55.871820   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:55.883781   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:55.901743   93835 logs.go:284] 0 containers: []
	W0717 16:09:55.901756   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:55.901828   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:55.921523   93835 logs.go:284] 0 containers: []
	W0717 16:09:55.921536   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:55.921612   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:55.940247   93835 logs.go:284] 0 containers: []
	W0717 16:09:55.940260   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:55.940344   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:55.961265   93835 logs.go:284] 0 containers: []
	W0717 16:09:55.961277   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:55.961345   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:55.980708   93835 logs.go:284] 0 containers: []
	W0717 16:09:55.980720   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:55.980783   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:56.001425   93835 logs.go:284] 0 containers: []
	W0717 16:09:56.001440   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:56.001524   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:56.021758   93835 logs.go:284] 0 containers: []
	W0717 16:09:56.021770   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:56.021835   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:56.042546   93835 logs.go:284] 0 containers: []
	W0717 16:09:56.042560   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:56.042568   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:56.042581   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:56.112845   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:56.112862   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:09:56.128034   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:56.128052   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:56.185240   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:56.185253   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:56.185275   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:56.201997   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:56.202010   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:58.756104   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:09:58.767989   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:09:58.787461   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.787476   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:09:58.787547   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:09:58.807015   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.807032   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:09:58.807111   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:09:58.828949   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.828961   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:09:58.829034   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:09:58.847307   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.847321   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:09:58.847389   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:09:58.866847   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.866858   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:09:58.866925   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:09:58.886780   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.886793   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:09:58.886869   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:09:58.906333   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.906346   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:09:58.906410   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:09:58.926115   93835 logs.go:284] 0 containers: []
	W0717 16:09:58.926129   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:09:58.926136   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:09:58.926146   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:09:58.983691   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:09:58.983704   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:09:58.983711   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:09:59.000859   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:09:59.000874   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:09:59.057440   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:09:59.057454   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:09:59.130974   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:09:59.130995   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:01.646175   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:01.658796   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:01.678043   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.678056   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:01.678125   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:01.697554   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.697567   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:01.697646   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:01.716138   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.716153   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:01.716229   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:01.735005   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.735019   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:01.735089   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:01.754578   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.754591   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:01.754658   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:01.773630   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.773643   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:01.773723   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:01.794916   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.794930   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:01.794996   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:01.815731   93835 logs.go:284] 0 containers: []
	W0717 16:10:01.815745   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:01.815752   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:01.815760   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:01.833499   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:01.833513   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:01.885375   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:01.885391   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:01.926363   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:01.926377   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:01.940671   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:01.940706   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:01.997088   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:04.497929   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:04.510146   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:04.529693   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.529707   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:04.529775   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:04.548749   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.548763   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:04.548831   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:04.569742   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.569755   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:04.569824   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:04.589691   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.589705   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:04.589776   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:04.609423   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.609439   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:04.609517   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:04.632291   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.632306   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:04.632413   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:04.651824   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.651837   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:04.651904   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:04.671556   93835 logs.go:284] 0 containers: []
	W0717 16:10:04.671569   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:04.671577   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:04.671584   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:04.744798   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:04.744816   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:04.744826   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:04.764525   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:04.764536   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:04.827166   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:04.827180   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:04.868996   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:04.869015   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:07.383865   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:07.396539   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:07.416598   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.416611   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:07.416676   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:07.435997   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.436011   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:07.436079   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:07.455335   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.455351   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:07.455427   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:07.474238   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.474250   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:07.474327   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:07.493982   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.493995   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:07.494063   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:07.516028   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.516041   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:07.516110   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:07.535756   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.535770   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:07.535856   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:07.557072   93835 logs.go:284] 0 containers: []
	W0717 16:10:07.557088   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:07.557100   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:07.557111   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:07.574529   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:07.574545   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:07.632349   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:07.632364   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:07.675445   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:07.675461   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:07.690442   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:07.690457   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:07.757364   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:10.259529   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:10.271533   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:10.294140   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.294158   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:10.294229   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:10.314762   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.314777   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:10.314857   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:10.335381   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.335398   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:10.335477   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:10.354586   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.354598   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:10.354664   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:10.373980   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.373994   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:10.374064   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:10.394020   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.394033   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:10.394101   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:10.412583   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.412597   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:10.412668   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:10.432277   93835 logs.go:284] 0 containers: []
	W0717 16:10:10.459848   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:10.459858   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:10.459867   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:10.500820   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:10.500834   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:10.515428   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:10.515442   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:10.574258   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:10.574271   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:10.574278   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:10.590689   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:10.590702   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:13.143964   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:13.155189   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:13.175077   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.175091   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:13.175162   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:13.194254   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.194267   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:13.194339   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:13.215931   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.215945   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:13.216019   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:13.238121   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.238139   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:13.238218   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:13.259359   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.259382   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:13.259461   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:13.279811   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.279827   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:13.279903   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:13.301138   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.301159   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:13.301246   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:13.321948   93835 logs.go:284] 0 containers: []
	W0717 16:10:13.321966   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:13.321977   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:13.321988   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:13.343945   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:13.343961   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:13.400526   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:13.400540   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:13.444674   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:13.444695   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:13.460058   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:13.460073   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:13.526371   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:16.026651   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:16.038744   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:16.061201   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.061214   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:16.061284   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:16.080543   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.080556   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:16.080629   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:16.107223   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.107239   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:16.107313   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:16.127596   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.127611   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:16.127689   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:16.149213   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.149228   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:16.149308   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:16.170995   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.171008   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:16.171078   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:16.191740   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.191760   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:16.191837   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:16.214535   93835 logs.go:284] 0 containers: []
	W0717 16:10:16.214566   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:16.214610   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:16.214623   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:16.262030   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:16.262052   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:16.277400   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:16.277420   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:16.354135   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:16.354150   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:16.354158   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:16.371997   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:16.372011   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:18.933578   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:18.947706   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:18.967023   93835 logs.go:284] 0 containers: []
	W0717 16:10:18.967036   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:18.967105   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:18.986542   93835 logs.go:284] 0 containers: []
	W0717 16:10:18.986554   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:18.986639   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:19.010180   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.010192   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:19.010260   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:19.032735   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.032749   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:19.032827   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:19.055954   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.055969   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:19.056045   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:19.075993   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.076006   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:19.076076   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:19.099060   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.099077   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:19.099166   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:19.122180   93835 logs.go:284] 0 containers: []
	W0717 16:10:19.122203   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:19.122213   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:19.122223   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:19.167493   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:19.167510   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:19.181772   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:19.181786   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:19.251439   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:19.251453   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:19.251460   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:19.270153   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:19.270169   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:21.832237   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:21.844373   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:21.863401   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.863415   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:21.863490   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:21.883627   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.883640   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:21.883716   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:21.903206   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.903219   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:21.903294   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:21.923957   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.923970   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:21.924039   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:21.944356   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.944371   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:21.944443   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:21.967263   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.967281   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:21.967362   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:21.990069   93835 logs.go:284] 0 containers: []
	W0717 16:10:21.990083   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:21.990152   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:22.009959   93835 logs.go:284] 0 containers: []
	W0717 16:10:22.009974   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:22.009981   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:22.009988   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:22.053324   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:22.053339   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:22.068347   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:22.068364   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:22.127189   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:22.127214   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:22.127241   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:22.143932   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:22.143961   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:24.697584   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:24.709913   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:24.730012   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.730025   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:24.730100   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:24.750676   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.750690   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:24.750764   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:24.771285   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.771300   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:24.771371   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:24.791175   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.791189   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:24.791262   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:24.811294   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.811307   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:24.811374   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:24.831998   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.832012   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:24.832077   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:24.853089   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.853104   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:24.853173   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:24.873729   93835 logs.go:284] 0 containers: []
	W0717 16:10:24.873742   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:24.873749   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:24.873756   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:24.914663   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:24.914678   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:24.928782   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:24.928798   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:24.987041   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:24.987053   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:24.987060   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:25.003194   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:25.003208   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:27.556042   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:27.567626   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:27.587464   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.587476   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:27.587544   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:27.608688   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.608701   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:27.608770   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:27.629242   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.629256   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:27.629326   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:27.648860   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.648871   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:27.648937   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:27.667875   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.667889   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:27.667958   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:27.687277   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.687291   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:27.687361   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:27.706790   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.706803   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:27.706873   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:27.726916   93835 logs.go:284] 0 containers: []
	W0717 16:10:27.726929   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:27.726935   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:27.726943   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:27.743312   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:27.743325   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:27.796263   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:27.796278   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:27.838681   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:27.838697   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:27.853021   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:27.853038   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:27.911553   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:30.411910   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:30.424441   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:30.443946   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.449911   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:30.449979   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:30.468524   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.468541   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:30.468622   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:30.487215   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.487229   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:30.487299   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:30.507969   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.507984   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:30.508057   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:30.529211   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.529226   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:30.529296   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:30.551043   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.551055   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:30.551126   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:30.604676   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.604691   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:30.604758   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:30.626280   93835 logs.go:284] 0 containers: []
	W0717 16:10:30.626293   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:30.626301   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:30.626309   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:30.667259   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:30.667272   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:30.681367   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:30.681410   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:30.740362   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:30.740388   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:30.740416   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:30.757493   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:30.757508   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:33.311802   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:33.322222   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:33.342989   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.343000   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:33.343073   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:33.362994   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.363005   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:33.363073   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:33.383252   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.383265   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:33.383332   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:33.404444   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.404457   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:33.404524   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:33.424124   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.424139   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:33.424214   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:33.444680   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.444694   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:33.444770   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:33.466880   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.466895   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:33.466966   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:33.491969   93835 logs.go:284] 0 containers: []
	W0717 16:10:33.491983   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:33.491990   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:33.491998   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:33.559943   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:33.559956   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:33.559966   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:33.597357   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:33.597373   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:33.656217   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:33.656233   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:33.697737   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:33.697754   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:36.217767   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:36.230138   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:36.249114   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.249127   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:36.249195   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:36.268924   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.268935   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:36.269004   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:36.287853   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.287865   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:36.287930   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:36.308027   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.308040   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:36.308109   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:36.328104   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.328138   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:36.328252   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:36.348565   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.348579   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:36.348655   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:36.367824   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.367838   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:36.367908   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:36.387144   93835 logs.go:284] 0 containers: []
	W0717 16:10:36.387157   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:36.387164   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:36.387170   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:36.428464   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:36.428482   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:36.442773   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:36.442786   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:36.500305   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:36.500333   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:36.500361   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:36.517728   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:36.517745   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:39.098851   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:39.110772   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:39.129185   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.129199   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:39.129268   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:39.148179   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.148191   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:39.148260   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:39.166865   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.166878   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:39.166955   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:39.187232   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.187246   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:39.187314   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:39.207638   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.207656   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:39.207747   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:39.228603   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.228616   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:39.228689   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:39.249050   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.249064   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:39.249137   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:39.268955   93835 logs.go:284] 0 containers: []
	W0717 16:10:39.268967   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:39.268974   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:39.268981   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:39.326857   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:39.326869   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:39.326876   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:39.343023   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:39.343037   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:39.395893   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:39.395906   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:39.437247   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:39.437268   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:41.956757   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:41.969174   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:41.987852   93835 logs.go:284] 0 containers: []
	W0717 16:10:41.987865   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:41.987935   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:42.008492   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.008505   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:42.008574   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:42.028014   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.028026   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:42.028095   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:42.048552   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.048568   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:42.048653   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:42.067162   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.067175   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:42.067243   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:42.086634   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.086647   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:42.086725   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:42.106332   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.106346   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:42.106411   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:42.126633   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.126646   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:42.126653   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:42.126663   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:42.168227   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:42.168241   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:42.182332   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:42.182362   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:42.239892   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:42.239906   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:42.239916   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:42.256211   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:42.256224   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:44.808911   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:44.829668   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:44.850398   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.850411   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:44.850476   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:44.871659   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.871673   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:44.871740   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:44.890585   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.890598   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:44.890666   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:44.911978   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.911990   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:44.912046   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:44.934710   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.934725   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:44.934789   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:44.955886   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.955900   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:44.955963   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:44.977394   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.977407   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:44.977475   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:44.999446   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.999458   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:44.999465   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:44.999474   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:45.045012   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:45.045030   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:45.060312   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:45.060330   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:45.122315   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:45.122329   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:45.122336   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:45.138487   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:45.138499   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
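The block above is one iteration of minikube's apiserver health check: it looks for a running kube-apiserver process with pgrep, queries Docker for each expected control-plane container by its k8s_<name> name prefix, and, finding none, falls back to collecting kubelet, dmesg, "describe nodes", Docker and container-status diagnostics. The same probe can be reproduced by hand against the node; this is a minimal sketch only, assuming the node is reachable with `minikube ssh` and using a placeholder profile name:

	#!/usr/bin/env bash
	# Reproduce the container-presence probe from the log above.
	# PROFILE is a placeholder; substitute the profile under test.
	PROFILE=minikube
	minikube ssh -p "$PROFILE" -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'" \
	  || echo "no kube-apiserver process found"
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(minikube ssh -p "$PROFILE" -- "docker ps -a --filter=name=k8s_$c --format '{{.ID}}'")
	  echo "$c: ${ids:-<no containers>}"
	done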
	I0717 16:10:47.698977   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:47.711225   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:47.730718   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.730730   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:47.730839   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:47.750638   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.750651   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:47.750723   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:47.771438   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.771452   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:47.771524   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:47.792644   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.792658   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:47.792731   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:47.813256   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.813269   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:47.813345   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:47.832241   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.832255   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:47.832323   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:47.852377   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.852390   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:47.852460   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:47.873676   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.873688   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:47.873695   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:47.873708   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:47.929530   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:47.929573   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:47.929581   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:47.946139   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:47.946152   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:47.998286   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:47.998299   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:48.040170   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:48.040185   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:50.554547   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:50.564554   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:50.584366   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.584381   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:50.584450   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:50.604191   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.604238   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:50.604303   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:50.625875   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.625888   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:50.625948   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:50.647327   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.647340   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:50.647414   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:50.667716   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.667729   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:50.667816   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:50.687381   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.687394   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:50.687465   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:50.709873   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.709885   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:50.709954   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:50.730732   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.730753   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:50.730767   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:50.730781   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:50.791974   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:50.791993   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:50.838164   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:50.838186   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:50.854855   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:50.854872   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:50.918143   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:50.918157   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:50.918164   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:53.435419   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:53.449629   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:53.470900   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.470913   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:53.470982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:53.491478   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.491491   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:53.491571   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:53.512355   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.512370   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:53.512442   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:53.533032   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.533046   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:53.533116   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:53.556595   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.556606   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:53.556668   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:53.577834   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.577847   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:53.577916   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:53.601063   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.601075   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:53.601147   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:53.621727   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.621741   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:53.621748   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:53.621757   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:53.664905   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:53.664923   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:53.680164   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:53.680178   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:53.744960   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:53.744973   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:53.744981   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:53.762699   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:53.762716   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:56.325331   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:56.337715   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:56.357505   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.357518   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:56.357587   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:56.375675   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.375687   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:56.375757   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:56.394743   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.394757   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:56.394824   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:56.413787   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.413799   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:56.413889   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:56.434849   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.434870   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:56.434930   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:56.455388   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.455409   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:56.455471   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:56.475473   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.475486   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:56.475565   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:56.494828   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.494842   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:56.494850   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:56.494860   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:56.552578   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:56.552591   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:56.552598   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:56.569742   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:56.569757   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:56.621258   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:56.621272   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:56.662749   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:56.662766   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:59.179075   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:59.191143   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:59.210631   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.210642   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:59.210711   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:59.228456   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.228469   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:59.228536   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:59.247316   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.247329   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:59.247398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:59.266948   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.266962   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:59.267031   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:59.286969   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.286981   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:59.287047   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:59.306193   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.306207   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:59.306273   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:59.326492   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.326505   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:59.326576   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:59.347147   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.347161   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:59.347168   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:59.347175   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:59.400761   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:59.400776   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:59.442358   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:59.442373   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:59.457274   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:59.457289   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:59.515710   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:59.515721   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:59.515755   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:02.032718   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:02.044102   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:02.064317   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.064331   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:02.064399   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:02.090663   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.090677   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:02.090745   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:02.110134   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.110147   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:02.110219   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:02.130420   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.130434   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:02.130501   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:02.150045   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.150058   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:02.150127   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:02.169243   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.169257   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:02.169327   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:02.188241   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.188255   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:02.188323   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:02.209048   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.209060   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:02.209067   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:02.209074   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:02.249643   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:02.249658   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:02.263819   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:02.263835   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:02.320577   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:02.320589   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:02.320597   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:02.336999   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:02.337013   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:04.889945   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:04.901352   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:04.921897   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.921911   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:04.921997   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:04.942258   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.942271   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:04.942340   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:04.962001   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.962015   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:04.962097   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:04.982082   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.982096   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:04.982169   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:05.003819   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.003834   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:05.003903   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:05.025602   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.025619   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:05.025699   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:05.046687   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.046704   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:05.046787   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:05.098791   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.098805   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:05.098812   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:05.098820   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:05.153430   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:05.153446   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:05.197431   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:05.197452   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:05.212791   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:05.212808   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:05.278635   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:05.278662   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:05.278685   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:07.796037   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:07.808044   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:07.836822   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.836841   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:07.836939   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:07.857875   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.857890   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:07.857958   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:07.878096   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.878109   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:07.878174   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:07.897490   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.897509   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:07.897595   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:07.919044   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.919058   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:07.919157   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:07.942026   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.942044   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:07.942122   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:07.966706   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.966736   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:07.966820   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:07.987555   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.987568   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:07.987575   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:07.987583   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:08.055799   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:08.055814   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:08.120776   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:08.120799   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:08.137139   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:08.137156   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:08.203696   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:08.203709   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:08.203717   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:10.722294   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:10.737105   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:10.757262   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.757277   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:10.757346   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:10.776587   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.776600   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:10.776667   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:10.797364   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.797384   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:10.797484   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:10.820241   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.820260   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:10.820369   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:10.845565   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.845579   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:10.845652   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:10.865746   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.865759   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:10.865850   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:10.884879   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.884894   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:10.884960   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:10.904361   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.904375   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:10.904382   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:10.904391   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:10.924221   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:10.924240   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:10.982436   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:10.982450   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:11.029474   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:11.029495   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:11.046370   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:11.046392   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:11.134328   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:13.635957   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:13.648062   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:13.667230   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.667243   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:13.667310   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:13.687993   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.688004   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:13.688078   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:13.707557   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.707569   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:13.707635   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:13.725850   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.725866   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:13.725936   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:13.746800   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.746814   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:13.746884   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:13.766155   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.766170   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:13.766239   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:13.785520   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.785533   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:13.785602   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:13.804423   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.804437   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:13.804444   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:13.804451   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:13.846303   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:13.846320   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:13.860235   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:13.860251   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:13.919154   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:13.919168   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:13.919176   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:13.935475   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:13.935489   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:16.489224   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:16.502116   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:16.521668   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.521681   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:16.521749   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:16.541973   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.541986   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:16.542050   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:16.561343   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.561355   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:16.561422   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:16.580112   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.580126   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:16.580193   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:16.599380   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.599393   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:16.599465   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:16.619800   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.619813   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:16.619883   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:16.638556   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.638570   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:16.638638   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:16.662372   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.662388   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:16.662397   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:16.662405   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:16.717464   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:16.717481   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:16.759570   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:16.759585   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:16.773696   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:16.773712   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:16.832301   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:16.832315   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:16.832322   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:19.349387   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:19.360398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:19.379799   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.379813   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:19.379881   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:19.400137   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.400150   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:19.400221   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:19.420120   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.420135   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:19.420202   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:19.440652   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.440666   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:19.440739   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:19.461061   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.461076   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:19.461170   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:19.479965   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.479979   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:19.480046   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:19.500592   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.500605   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:19.500672   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:19.520704   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.520717   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:19.520725   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:19.520732   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:19.534116   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:19.534133   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:19.594934   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:19.594948   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:19.594956   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:19.611757   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:19.611770   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:19.666218   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:19.666233   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:22.213692   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:22.226048   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:22.245727   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.245742   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:22.245811   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:22.266157   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.266172   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:22.266239   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:22.284979   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.284995   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:22.285068   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:22.306538   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.306555   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:22.306628   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:22.325484   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.325498   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:22.325568   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:22.345799   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.345813   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:22.345883   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:22.365977   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.365991   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:22.366060   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:22.397379   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.397394   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:22.397401   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:22.397408   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:22.450063   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:22.450077   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:22.490827   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:22.490840   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:22.504655   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:22.504669   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:22.562404   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:22.562415   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:22.562425   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:25.079939   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:25.092345   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:25.111113   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.111126   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:25.111203   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:25.129832   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.129844   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:25.129911   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:25.149703   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.149717   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:25.149789   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:25.169578   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.169590   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:25.169650   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:25.189904   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.189915   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:25.189982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:25.209990   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.210003   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:25.210072   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:25.229691   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.229704   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:25.229767   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:25.250261   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.250274   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:25.250282   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:25.250293   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:25.293962   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:25.293981   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:25.310454   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:25.310470   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:25.371428   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:25.371440   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:25.371447   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:25.387484   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:25.387497   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:27.939780   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:27.952060   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:27.971854   93835 logs.go:284] 0 containers: []
	W0717 16:11:27.971867   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:27.971939   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:27.992234   93835 logs.go:284] 0 containers: []
	W0717 16:11:27.992248   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:27.992317   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:28.011020   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.011032   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:28.011099   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:28.032885   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.032897   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:28.032965   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:28.053628   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.053643   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:28.053714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:28.072098   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.072111   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:28.072186   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:28.091790   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.091804   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:28.091872   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:28.110699   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.110712   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
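	[editor's note] The repeated 'No container was found matching ...' warnings come from listing containers whose names match a k8s_<component> filter and counting the returned IDs; zero IDs yields the warning. A short Go sketch of one such pass (the component list is copied from the log; the helper name is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns IDs of containers whose names match k8s_<component>,
	// mirroring the docker ps probes in the log.
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
		for _, c := range components {
			ids := containerIDs(c)
			fmt.Printf("%d containers: %v\n", len(ids), ids)
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
			}
		}
	}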
	I0717 16:11:28.110720   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:28.110727   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:28.151524   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:28.151540   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:28.166240   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:28.166255   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:28.224810   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:28.224823   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:28.224830   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:28.241412   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:28.241426   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:30.800962   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:30.813385   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:30.833309   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.833323   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:30.833390   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:30.852282   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.852297   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:30.852365   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:30.870815   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.870827   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:30.870893   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:30.889400   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.889414   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:30.889481   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:30.909299   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.909328   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:30.909393   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:30.928719   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.928733   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:30.928802   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:30.949012   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.949025   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:30.949093   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:30.967761   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.967774   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:30.967780   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:30.967787   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:30.984715   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:30.984729   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:31.035522   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:31.035538   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:31.078460   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:31.078474   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:31.092562   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:31.092597   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:31.149881   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:33.651898   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:33.664361   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:33.684241   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.684259   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:33.684330   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:33.703735   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.703748   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:33.703819   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:33.724109   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.724123   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:33.724189   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:33.743630   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.743645   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:33.743714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:33.763041   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.763054   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:33.763123   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:33.781213   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.781227   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:33.781304   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:33.800470   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.800483   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:33.800549   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:33.820674   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.820689   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:33.820695   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:33.820704   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:33.862316   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:33.862331   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:33.876505   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:33.876520   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:33.934552   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:33.934564   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:33.934570   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:33.950959   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:33.950971   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:36.503291   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:36.514325   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:36.535202   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.535215   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:36.535286   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:36.556627   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.556641   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:36.556714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:36.599015   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.599028   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:36.599096   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:36.619003   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.619017   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:36.619092   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:36.638681   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.638694   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:36.638763   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:36.658586   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.658598   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:36.658664   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:36.677914   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.677926   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:36.677992   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:36.697802   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.697816   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:36.697823   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:36.697833   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:36.749848   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:36.749863   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:36.789680   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:36.789694   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:36.804203   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:36.804217   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:36.861013   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:36.861024   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:36.861031   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:39.378318   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:39.390982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:39.410146   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.410159   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:39.410226   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:39.429627   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.429640   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:39.429708   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:39.448992   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.449005   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:39.449074   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:39.467752   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.467765   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:39.467832   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:39.487931   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.487944   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:39.488010   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:39.509205   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.509220   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:39.509290   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:39.530820   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.530835   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:39.530909   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:39.551623   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.551641   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:39.551648   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:39.551659   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:39.617986   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:39.618006   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:39.632860   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:39.632894   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:39.692796   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:39.692808   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:39.692815   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:39.709142   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:39.709155   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:42.264100   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:42.276461   93835 kubeadm.go:640] restartCluster took 4m13.111495627s
	W0717 16:11:42.276502   93835 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
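	[editor's note] The pgrep probes above repeat roughly every 2.5 seconds until restartCluster gives up after a little over four minutes; the pattern is a plain poll-until-deadline loop. A minimal Go sketch of that loop (the probe command is from the log; the interval and timeout are inferred from the timestamps, not taken from minikube's source):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess retries the pgrep probe from the log until a
	// kube-apiserver process appears or the deadline expires.
	func waitForAPIServerProcess(timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			time.Sleep(interval)
		}
		return errors.New("apiserver healthz: apiserver process never appeared")
	}

	func main() {
		if err := waitForAPIServerProcess(4*time.Minute, 2500*time.Millisecond); err != nil {
			fmt.Println("!", err)
		}
	}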
	I0717 16:11:42.276522   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 16:11:42.690572   93835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:11:42.701697   93835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:11:42.710816   93835 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:11:42.710870   93835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:11:42.719735   93835 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
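	[editor's note] The 'config check failed, skipping stale config cleanup' branch above works by testing whether the four kubeconfig files kubeadm manages all exist; any missing file (the ls exit status 2 above) means there is no stale configuration to clean, so minikube proceeds directly to `kubeadm init`. A hedged sketch of that existence check (paths are copied from the log; the function is illustrative and only meaningful inside the minikube node):

	package main

	import (
		"fmt"
		"os"
	)

	// staleConfigPresent reports whether every kubeadm-managed config file
	// exists, mirroring the `sudo ls -la ...` probe in the log.
	func staleConfigPresent() bool {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if _, err := os.Stat(f); err != nil {
				return false // any missing file: nothing stale to clean up
			}
		}
		return true
	}

	func main() {
		fmt.Println("stale config present:", staleConfigPresent())
	}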
	I0717 16:11:42.719764   93835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:11:42.769533   93835 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:11:42.769653   93835 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:11:43.030582   93835 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:11:43.030745   93835 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:11:43.030836   93835 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:11:43.210961   93835 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:11:43.211663   93835 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:11:43.218510   93835 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:11:43.288047   93835 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:11:43.309713   93835 out.go:204]   - Generating certificates and keys ...
	I0717 16:11:43.309791   93835 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:11:43.309860   93835 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:11:43.309929   93835 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 16:11:43.309982   93835 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 16:11:43.310059   93835 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 16:11:43.310123   93835 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 16:11:43.310232   93835 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 16:11:43.310287   93835 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 16:11:43.310343   93835 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 16:11:43.310482   93835 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 16:11:43.310527   93835 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 16:11:43.310596   93835 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:11:43.502662   93835 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:11:43.694360   93835 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:11:43.844413   93835 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:11:43.952113   93835 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:11:43.952913   93835 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:11:43.974845   93835 out.go:204]   - Booting up control plane ...
	I0717 16:11:43.975085   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:11:43.975246   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:11:43.975363   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:11:43.975502   93835 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:11:43.975754   93835 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:12:23.963686   93835 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:12:23.964879   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:23.965093   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:28.966848   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:28.967069   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:38.968951   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:38.969217   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:58.971415   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:58.971613   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:13:38.974169   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:13:38.974394   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:13:38.974409   93835 kubeadm.go:322] 
	I0717 16:13:38.974446   93835 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:13:38.974524   93835 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:13:38.974533   93835 kubeadm.go:322] 
	I0717 16:13:38.974597   93835 kubeadm.go:322] This error is likely caused by:
	I0717 16:13:38.974636   93835 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:13:38.974762   93835 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:13:38.974776   93835 kubeadm.go:322] 
	I0717 16:13:38.974939   93835 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:13:38.974980   93835 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:13:38.975024   93835 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:13:38.975029   93835 kubeadm.go:322] 
	I0717 16:13:38.975152   93835 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:13:38.975267   93835 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:13:38.975370   93835 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:13:38.975435   93835 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:13:38.975498   93835 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:13:38.975527   93835 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:13:38.976853   93835 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:13:38.976914   93835 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:13:38.977018   93835 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:13:38.977096   93835 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:13:38.977165   93835 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:13:38.977221   93835 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
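	[editor's note] The [kubelet-check] timestamps above (a 40-second initial wait, then retries about 5, 10, 20, and 40 seconds apart) show kubeadm probing the kubelet's local healthz endpoint with a doubling backoff. A minimal Go sketch of that probe (the endpoint is from the log; the backoff schedule is inferred from the printed timestamps rather than taken from kubeadm's source):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// checkKubeletHealthz polls http://localhost:10248/healthz, doubling the
	// delay between attempts, and gives up after the 40-second retry.
	func checkKubeletHealthz() error {
		time.Sleep(40 * time.Second) // initial timeout seen in the log
		for wait := 5 * time.Second; wait <= 40*time.Second; wait *= 2 {
			resp, err := http.Get("http://localhost:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			fmt.Println("[kubelet-check] It seems like the kubelet isn't running or healthy.")
			time.Sleep(wait)
		}
		return fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		if err := checkKubeletHealthz(); err != nil {
			fmt.Println(err)
		}
	}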
	W0717 16:13:38.977296   93835 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0717 16:13:38.977331   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 16:13:39.389138   93835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:13:39.400445   93835 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:13:39.400507   93835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:13:39.409262   93835 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 16:13:39.409282   93835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:13:39.459947   93835 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:13:39.459994   93835 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:13:39.731126   93835 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:13:39.731279   93835 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:13:39.731447   93835 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:13:39.913619   93835 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:13:39.914356   93835 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:13:39.921205   93835 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:13:39.998119   93835 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:13:40.019725   93835 out.go:204]   - Generating certificates and keys ...
	I0717 16:13:40.019790   93835 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:13:40.019869   93835 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:13:40.019960   93835 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 16:13:40.020011   93835 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 16:13:40.020077   93835 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 16:13:40.020122   93835 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 16:13:40.020178   93835 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 16:13:40.020250   93835 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 16:13:40.020341   93835 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 16:13:40.020435   93835 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 16:13:40.020476   93835 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 16:13:40.020520   93835 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:13:40.095710   93835 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:13:40.210159   93835 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:13:40.415683   93835 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:13:40.529613   93835 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:13:40.530275   93835 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:13:40.551861   93835 out.go:204]   - Booting up control plane ...
	I0717 16:13:40.552029   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:13:40.552248   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:13:40.552358   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:13:40.552516   93835 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:13:40.552838   93835 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:14:20.540527   93835 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:14:20.541233   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:20.541448   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:25.543077   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:25.543306   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:35.544093   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:35.544297   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:55.546072   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:55.546321   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:15:35.547474   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:15:35.547687   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:15:35.547702   93835 kubeadm.go:322] 
	I0717 16:15:35.547758   93835 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:15:35.547811   93835 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:15:35.547818   93835 kubeadm.go:322] 
	I0717 16:15:35.547889   93835 kubeadm.go:322] This error is likely caused by:
	I0717 16:15:35.547934   93835 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:15:35.548051   93835 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:15:35.548058   93835 kubeadm.go:322] 
	I0717 16:15:35.548228   93835 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:15:35.548277   93835 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:15:35.548335   93835 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:15:35.548352   93835 kubeadm.go:322] 
	I0717 16:15:35.548489   93835 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:15:35.548591   93835 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:15:35.548686   93835 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:15:35.548726   93835 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:15:35.548788   93835 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:15:35.548820   93835 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:15:35.550483   93835 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:15:35.550555   93835 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:15:35.550662   93835 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:15:35.550753   93835 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:15:35.550826   93835 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:15:35.550883   93835 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 16:15:35.550914   93835 kubeadm.go:406] StartCluster complete in 8m6.413956871s
	I0717 16:15:35.551001   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:15:35.574903   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.574915   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:15:35.575006   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:15:35.595318   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.595334   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:15:35.595398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:15:35.614450   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.614463   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:15:35.614535   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:15:35.633854   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.633868   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:15:35.633941   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:15:35.654538   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.654550   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:15:35.654616   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:15:35.673690   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.673704   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:15:35.673768   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:15:35.692920   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.692934   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:15:35.693003   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:15:35.712944   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.712964   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:15:35.712974   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:15:35.712988   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:15:35.753004   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:15:35.753023   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:15:35.768094   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:15:35.768112   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:15:35.829397   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:15:35.829409   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:15:35.829416   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:15:35.846538   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:15:35.846553   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 16:15:35.901689   93835 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
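For manual triage, the probe kubeadm keeps retrying and the commands it recommends can be run directly on the node (for a docker-driver minikube node, typically via minikube ssh); a sketch built only from the commands quoted above:

	# The health probe kubeadm retries until its 4m0s timeout:
	curl -sSL http://localhost:10248/healthz

	# Kubelet service state and recent log, as suggested above:
	systemctl status kubelet
	journalctl -xeu kubelet

	# Control-plane containers under the docker runtime:
	docker ps -a | grep kube | grep -v pause
	docker logs CONTAINERID   # replace CONTAINERID with a failing container's ID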
	W0717 16:15:35.901715   93835 out.go:239] * 
	W0717 16:15:35.901816   93835 out.go:239] * 
	W0717 16:15:35.902485   93835 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 16:15:35.965205   93835 out.go:177] 
	W0717 16:15:36.007342   93835 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	
	W0717 16:15:36.007407   93835 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 16:15:36.007431   93835 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 16:15:36.070330   93835 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-770000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
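The Suggestion in the stderr above targets kubeadm's preflight warning that Docker is using the "cgroupfs" cgroup driver while "systemd" is recommended. Two possible fixes, the first taken verbatim from the log and the second a standard Docker daemon setting (a sketch only, not something this run confirms minikube applied):

	# From the Suggestion line: make the kubelet use the systemd cgroup driver.
	out/minikube-darwin-amd64 start -p old-k8s-version-770000 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Alternatively, switch Docker itself to the systemd driver (restart required):
	cat <<'EOF' | sudo tee /etc/docker/daemon.json
	{ "exec-opts": ["native.cgroupdriver=systemd"] }
	EOF
	sudo systemctl restart docker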
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56",
	        "Created": "2023-07-17T23:01:29.298658175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1241282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:07:11.601689472Z",
	            "FinishedAt": "2023-07-17T23:07:08.838461805Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hostname",
	        "HostsPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hosts",
	        "LogPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56-json.log",
	        "Name": "/old-k8s-version-770000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-770000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-770000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-770000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-770000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-770000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07494e311db80b6eb897208f0309b8eb9434435b8000ecbc8c45045c67b478ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57352"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/07494e311db8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-770000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6129a8b881ba",
	                        "old-k8s-version-770000"
	                    ],
	                    "NetworkID": "e0b81b03df244d0caf05aedc1b790fca29cd02fdbba810fc90a219bab32afcb3",
	                    "EndpointID": "c032de70445ab8aa7fa6e42f3ed33666738d73429096f11ff6d7816e52abc659",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
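The inspect dump above can be reduced to the fields that matter for this post-mortem with docker's built-in Go templates; a sketch using the container from this run:

	# Container state and restart count:
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-770000

	# Host port bound to the node's SSH port (127.0.0.1:57348 in this run):
	docker port old-k8s-version-770000 22

	# The node's address on the cluster network (192.168.76.2 in this run):
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-770000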
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (374.478104ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
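The harness treats exit status 2 as possibly fine because minikube status reports component state through its exit code as well as its output; a quick manual check with the same format template used above:

	out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000
	echo "exit: $?"   # 0 when all components run; nonzero flags a stopped or unhealthy one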
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25: (1.4189602s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-679000 sudo                                  | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:01 PDT |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-679000 sudo                                  | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p calico-679000 sudo                                  | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:01 PDT |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p calico-679000 sudo find                             | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:01 PDT |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p calico-679000 sudo crio                             | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:01 PDT |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p calico-679000                                       | calico-679000          | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:01 PDT |
	| start   | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:01 PDT | 17 Jul 23 16:03 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-042000             | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:03 PDT | 17 Jul 23 16:03 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:03 PDT | 17 Jul 23 16:03 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-042000                  | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:03 PDT | 17 Jul 23 16:03 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:03 PDT | 17 Jul 23 16:08 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-770000        | old-k8s-version-770000 | jenkins | v1.31.0 | 17 Jul 23 16:05 PDT |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-770000                              | old-k8s-version-770000 | jenkins | v1.31.0 | 17 Jul 23 16:07 PDT | 17 Jul 23 16:07 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-770000             | old-k8s-version-770000 | jenkins | v1.31.0 | 17 Jul 23 16:07 PDT | 17 Jul 23 16:07 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-770000                              | old-k8s-version-770000 | jenkins | v1.31.0 | 17 Jul 23 16:07 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p no-preload-042000 sudo                              | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	| delete  | -p no-preload-042000                                   | no-preload-042000      | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	| start   | -p embed-certs-306000                                  | embed-certs-306000     | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:10 PDT |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306000            | embed-certs-306000     | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p embed-certs-306000                                  | embed-certs-306000     | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306000                 | embed-certs-306000     | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p embed-certs-306000                                  | embed-certs-306000     | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 16:10:45
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
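Given that line format, the warning and error lines can be filtered out of a saved log with a simple pattern match; a sketch against the logs.txt file that minikube's own advice asks reporters to attach, assuming it was written in this same format:

	# Severity is the leading [IWEF] character; keep warnings, errors, fatals:
	grep -E '^[WEF][0-9]{4}' logs.txt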
	I0717 16:10:45.366617   94398 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:10:45.366792   94398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:10:45.366798   94398 out.go:309] Setting ErrFile to fd 2...
	I0717 16:10:45.366802   94398 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:10:45.366998   94398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:10:45.368373   94398 out.go:303] Setting JSON to false
	I0717 16:10:45.387629   94398 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":25813,"bootTime":1689609632,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:10:45.387718   94398 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:10:45.409561   94398 out.go:177] * [embed-certs-306000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:10:41.956757   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:41.969174   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:41.987852   93835 logs.go:284] 0 containers: []
	W0717 16:10:41.987865   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:41.987935   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:42.008492   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.008505   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:42.008574   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:42.028014   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.028026   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:42.028095   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:42.048552   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.048568   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:42.048653   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:42.067162   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.067175   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:42.067243   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:42.086634   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.086647   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:42.086725   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:42.106332   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.106346   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:42.106411   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:42.126633   93835 logs.go:284] 0 containers: []
	W0717 16:10:42.126646   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:42.126653   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:42.126663   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:42.168227   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:42.168241   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:42.182332   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:42.182362   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:42.239892   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:42.239906   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:42.239916   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:42.256211   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:42.256224   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
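The block above is minikube's control-plane health probe: for each expected component it lists containers named k8s_<component>, and a zero count produces the "No container was found" warnings, after which logs are gathered from kubelet, dmesg, Docker, and container status (with a crictl-or-docker fallback). A minimal Go sketch of that probe, assuming a local docker CLI; the real loop runs these commands on the guest over SSH via ssh_runner.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter name=... --format {{.ID}}` calls above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe for %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}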
	I0717 16:10:44.808911   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:44.829668   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:44.850398   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.850411   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:44.850476   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:44.871659   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.871673   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:44.871740   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:44.890585   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.890598   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:44.890666   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:44.911978   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.911990   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:44.912046   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:44.934710   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.934725   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:44.934789   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:44.955886   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.955900   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:44.955963   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:44.977394   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.977407   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:44.977475   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:44.999446   93835 logs.go:284] 0 containers: []
	W0717 16:10:44.999458   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:44.999465   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:44.999474   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:45.045012   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:45.045030   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:45.060312   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:45.060330   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:45.122315   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:45.122329   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:45.122336   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:45.138487   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:45.138499   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:45.453470   94398 notify.go:220] Checking for updates...
	I0717 16:10:45.497208   94398 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:10:45.518048   94398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:10:45.539200   94398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:10:45.560219   94398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:10:45.602310   94398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:10:45.625505   94398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:10:45.647836   94398 config.go:182] Loaded profile config "embed-certs-306000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:10:45.648582   94398 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:10:45.704509   94398 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:10:45.704676   94398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:10:45.806169   94398 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:10:45.79407701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:10:45.849856   94398 out.go:177] * Using the docker driver based on existing profile
	I0717 16:10:45.871708   94398 start.go:298] selected driver: docker
	I0717 16:10:45.871737   94398 start.go:880] validating driver "docker" against &{Name:embed-certs-306000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:10:45.871879   94398 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:10:45.876008   94398 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:10:45.979922   94398 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:10:45.968037209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:10:45.980151   94398 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0717 16:10:45.980175   94398 cni.go:84] Creating CNI manager for ""
	I0717 16:10:45.980187   94398 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:10:45.980200   94398 start_flags.go:319] config:
	{Name:embed-certs-306000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:10:46.066390   94398 out.go:177] * Starting control plane node embed-certs-306000 in cluster embed-certs-306000
	I0717 16:10:46.110330   94398 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:10:46.131386   94398 out.go:177] * Pulling base image ...
	I0717 16:10:46.173274   94398 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:10:46.173306   94398 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:10:46.173389   94398 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 16:10:46.173415   94398 cache.go:57] Caching tarball of preloaded images
	I0717 16:10:46.173642   94398 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:10:46.174215   94398 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 16:10:46.174689   94398 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/config.json ...
	I0717 16:10:46.223834   94398 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:10:46.223856   94398 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:10:46.223874   94398 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:10:46.223912   94398 start.go:365] acquiring machines lock for embed-certs-306000: {Name:mk03af95ef6b011e5c7b759dc0d00d1db0b894cb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:10:46.224001   94398 start.go:369] acquired machines lock for "embed-certs-306000" in 70.419µs
	I0717 16:10:46.224031   94398 start.go:96] Skipping create...Using existing machine configuration
	I0717 16:10:46.224039   94398 fix.go:54] fixHost starting: 
	I0717 16:10:46.225042   94398 cli_runner.go:164] Run: docker container inspect embed-certs-306000 --format={{.State.Status}}
	I0717 16:10:46.277788   94398 fix.go:102] recreateIfNeeded on embed-certs-306000: state=Stopped err=<nil>
	W0717 16:10:46.277836   94398 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 16:10:46.299675   94398 out.go:177] * Restarting existing docker container for "embed-certs-306000" ...
	I0717 16:10:46.320313   94398 cli_runner.go:164] Run: docker start embed-certs-306000
	I0717 16:10:46.575195   94398 cli_runner.go:164] Run: docker container inspect embed-certs-306000 --format={{.State.Status}}
	I0717 16:10:46.630864   94398 kic.go:426] container "embed-certs-306000" state is running.
	I0717 16:10:46.631662   94398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-306000
	I0717 16:10:46.688508   94398 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/config.json ...
	I0717 16:10:46.688900   94398 machine.go:88] provisioning docker machine ...
	I0717 16:10:46.688925   94398 ubuntu.go:169] provisioning hostname "embed-certs-306000"
	I0717 16:10:46.689001   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:46.747300   94398 main.go:141] libmachine: Using SSH client type: native
	I0717 16:10:46.747930   94398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57487 <nil> <nil>}
	I0717 16:10:46.747954   94398 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-306000 && echo "embed-certs-306000" | sudo tee /etc/hostname
	I0717 16:10:46.749069   94398 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 16:10:49.893033   94398 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-306000
	
	I0717 16:10:49.893123   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:49.943143   94398 main.go:141] libmachine: Using SSH client type: native
	I0717 16:10:49.943481   94398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57487 <nil> <nil>}
	I0717 16:10:49.943494   94398 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-306000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-306000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-306000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:10:50.072140   94398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
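The SSH command above patches /etc/hosts idempotently: if no entry for the machine name exists, it rewrites a stale 127.0.1.1 line in place or appends a fresh one (the same idiom reappears later for host.minikube.internal and control-plane.minikube.internal). A rough local Go equivalent, with the path and hostname taken from the log; the real provisioner does this remotely with grep, sed, and tee:

package main

import (
	"fmt"
	"os"
	"strings"
)

// patchHosts returns content with a 127.0.1.1 entry for name, replacing a
// stale 127.0.1.1 line or appending one; it is a no-op if name is present.
func patchHosts(content, name string) string {
	lines := strings.Split(content, "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			return content // already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the stale entry
			return strings.Join(lines, "\n")
		}
	}
	return strings.TrimRight(content, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	b, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(patchHosts(string(b), "embed-certs-306000"))
}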
	I0717 16:10:50.072161   94398 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:10:50.072187   94398 ubuntu.go:177] setting up certificates
	I0717 16:10:50.072196   94398 provision.go:83] configureAuth start
	I0717 16:10:50.072269   94398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-306000
	I0717 16:10:50.124924   94398 provision.go:138] copyHostCerts
	I0717 16:10:50.125048   94398 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:10:50.125060   94398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:10:50.125153   94398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:10:50.125394   94398 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:10:50.125401   94398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:10:50.125467   94398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:10:50.125646   94398 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:10:50.125652   94398 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:10:50.125712   94398 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:10:50.125860   94398 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.embed-certs-306000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube embed-certs-306000]
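The server certificate above is generated with the SAN list shown in the log (node IP, loopback, and the machine names). An illustrative Go sketch with crypto/x509; for brevity it self-signs, whereas minikube signs the server cert with its ca.pem/ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-306000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log line: machine names plus node and loopback IPs.
		DNSNames:    []string{"localhost", "minikube", "embed-certs-306000"},
		IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
	}
	// Self-signed here for brevity; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}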
	I0717 16:10:50.186907   94398 provision.go:172] copyRemoteCerts
	I0717 16:10:50.186965   94398 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:10:50.187019   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:50.240044   94398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57487 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/embed-certs-306000/id_rsa Username:docker}
	I0717 16:10:50.336204   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:10:50.357721   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 16:10:47.698977   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:47.711225   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:47.730718   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.730730   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:47.730839   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:47.750638   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.750651   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:47.750723   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:47.771438   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.771452   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:47.771524   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:47.792644   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.792658   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:47.792731   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:47.813256   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.813269   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:47.813345   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:47.832241   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.832255   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:47.832323   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:47.852377   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.852390   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:47.852460   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:47.873676   93835 logs.go:284] 0 containers: []
	W0717 16:10:47.873688   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:47.873695   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:47.873708   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:47.929530   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:47.929573   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:47.929581   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:47.946139   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:47.946152   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:47.998286   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:47.998299   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:48.040170   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:48.040185   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:50.379413   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 16:10:50.415884   94398 provision.go:86] duration metric: configureAuth took 343.667635ms
	I0717 16:10:50.415899   94398 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:10:50.416116   94398 config.go:182] Loaded profile config "embed-certs-306000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:10:50.416204   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:50.466278   94398 main.go:141] libmachine: Using SSH client type: native
	I0717 16:10:50.466641   94398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57487 <nil> <nil>}
	I0717 16:10:50.466653   94398 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:10:50.595993   94398 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:10:50.596012   94398 ubuntu.go:71] root file system type: overlay
	I0717 16:10:50.596113   94398 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:10:50.596210   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:50.650316   94398 main.go:141] libmachine: Using SSH client type: native
	I0717 16:10:50.650680   94398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57487 <nil> <nil>}
	I0717 16:10:50.650731   94398 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:10:50.789152   94398 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:10:50.789248   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:50.847527   94398 main.go:141] libmachine: Using SSH client type: native
	I0717 16:10:50.847921   94398 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 57487 <nil> <nil>}
	I0717 16:10:50.847938   94398 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:10:50.984093   94398 main.go:141] libmachine: SSH cmd err, output: <nil>: 
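The command above is a compare-then-swap idiom: install docker.service.new only when it differs from the live unit, then daemon-reload, enable, and restart. A local Go sketch of the same logic, with paths from the log; it assumes it runs as root on the guest, and the systemctl flags are simplified relative to the log's `systemctl -f` calls:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// installUnit replaces livePath with newPath only when the contents differ,
// then reloads systemd and restarts docker, mirroring the diff || { ... } above.
func installUnit(newPath, livePath string) error {
	newUnit, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	live, _ := os.ReadFile(livePath) // a missing live unit means "install"
	if bytes.Equal(newUnit, live) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := installUnit("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
}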
	I0717 16:10:50.984108   94398 machine.go:91] provisioned docker machine in 4.295150743s
	I0717 16:10:50.984120   94398 start.go:300] post-start starting for "embed-certs-306000" (driver="docker")
	I0717 16:10:50.984140   94398 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:10:50.984204   94398 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:10:50.984278   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:51.053975   94398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57487 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/embed-certs-306000/id_rsa Username:docker}
	I0717 16:10:51.148565   94398 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:10:51.152682   94398 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:10:51.152705   94398 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:10:51.152712   94398 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:10:51.152717   94398 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:10:51.152725   94398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:10:51.152821   94398 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:10:51.153000   94398 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:10:51.153185   94398 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:10:51.161738   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:10:51.183242   94398 start.go:303] post-start completed in 199.109779ms
	I0717 16:10:51.183400   94398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:10:51.183491   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:51.234419   94398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57487 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/embed-certs-306000/id_rsa Username:docker}
	I0717 16:10:51.326370   94398 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
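The two df probes above read disk usage for /var, each taking one column of the second output row exactly as the awk 'NR==2{print $N}' pipelines do. A small Go sketch of that parse; the -BG flag is GNU df, so this only runs on the Linux guest:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dfField runs df with the given args and returns column `field` (1-based)
// of the second output row, like awk 'NR==2{print $field}'.
func dfField(args []string, field int) (string, error) {
	out, err := exec.Command("df", args...).Output()
	if err != nil {
		return "", err
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	if len(lines) < 2 {
		return "", fmt.Errorf("unexpected df output: %q", out)
	}
	cols := strings.Fields(lines[1])
	if field > len(cols) {
		return "", fmt.Errorf("no column %d in %q", field, lines[1])
	}
	return cols[field-1], nil
}

func main() {
	used, _ := dfField([]string{"-h", "/var"}, 5)  // Use% column, as awk $5
	free, _ := dfField([]string{"-BG", "/var"}, 4) // Avail in GB, as awk $4
	fmt.Println("used:", used, "free:", free)
}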
	I0717 16:10:51.331727   94398 fix.go:56] fixHost completed within 5.107625459s
	I0717 16:10:51.331743   94398 start.go:83] releasing machines lock for "embed-certs-306000", held for 5.107676231s
	I0717 16:10:51.331843   94398 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-306000
	I0717 16:10:51.383335   94398 ssh_runner.go:195] Run: cat /version.json
	I0717 16:10:51.383361   94398 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:10:51.383410   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:51.383453   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:51.440025   94398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57487 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/embed-certs-306000/id_rsa Username:docker}
	I0717 16:10:51.440054   94398 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:57487 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/embed-certs-306000/id_rsa Username:docker}
	I0717 16:10:51.530863   94398 ssh_runner.go:195] Run: systemctl --version
	I0717 16:10:51.649662   94398 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:10:51.655492   94398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:10:51.673840   94398 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:10:51.673943   94398 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 16:10:51.683377   94398 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
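The find/sed pipeline above patches any loopback CNI config so it carries a "name" field and cniVersion 1.0.0, then disables bridge/podman configs. A sketch that applies the same loopback patch by round-tripping the JSON instead of sed; the file path here is hypothetical, since the log matches *loopback.conf* by glob:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// patchLoopback ensures the loopback CNI config has a "name" and forces
// cniVersion to 1.0.0, as the sed commands in the log do in place.
func patchLoopback(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(b, &conf); err != nil {
		return err
	}
	if conf["type"] != "loopback" {
		return fmt.Errorf("%s is not a loopback config", path)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		return err
	}
	return os.WriteFile(path, out, 0644)
}

func main() {
	// Hypothetical filename; the real code globs /etc/cni/net.d/*loopback.conf*.
	if err := patchLoopback("/etc/cni/net.d/200-loopback.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}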
	I0717 16:10:51.683390   94398 start.go:466] detecting cgroup driver to use...
	I0717 16:10:51.683406   94398 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:10:51.683566   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:10:51.699743   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 16:10:51.709713   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:10:51.719616   94398 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:10:51.719716   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:10:51.729881   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:10:51.740215   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:10:51.750030   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:10:51.760455   94398 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:10:51.769701   94398 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:10:51.780042   94398 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:10:51.788900   94398 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:10:51.797639   94398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:10:51.864912   94398 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 16:10:51.945437   94398 start.go:466] detecting cgroup driver to use...
	I0717 16:10:51.945457   94398 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:10:51.945532   94398 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:10:51.958857   94398 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:10:51.958934   94398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:10:51.972647   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:10:51.994721   94398 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:10:51.999881   94398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:10:52.009911   94398 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:10:52.029501   94398 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:10:52.136963   94398 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:10:52.235133   94398 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:10:52.235153   94398 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:10:52.255182   94398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:10:52.348725   94398 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:10:52.648307   94398 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:10:52.724477   94398 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 16:10:52.797829   94398 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:10:52.869618   94398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:10:52.942089   94398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 16:10:52.955495   94398 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:10:53.033188   94398 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 16:10:53.110339   94398 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 16:10:53.110451   94398 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 16:10:53.115684   94398 start.go:534] Will wait 60s for crictl version
	I0717 16:10:53.115747   94398 ssh_runner.go:195] Run: which crictl
	I0717 16:10:53.120370   94398 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 16:10:53.168046   94398 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 16:10:53.168128   94398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:10:53.194243   94398 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:10:53.264147   94398 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 16:10:53.264289   94398 cli_runner.go:164] Run: docker exec -t embed-certs-306000 dig +short host.docker.internal
	I0717 16:10:53.382752   94398 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:10:53.382887   94398 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:10:53.387999   94398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:10:53.399178   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:53.453343   94398 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:10:53.453417   94398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:10:53.473730   94398 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 16:10:53.473745   94398 docker.go:566] Images already preloaded, skipping extraction
	I0717 16:10:53.473812   94398 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:10:53.495640   94398 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	kindest/kindnetd:v20230511-dc714da8
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0717 16:10:53.495661   94398 cache_images.go:84] Images are preloaded, skipping loading
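The preload check above compares `docker images --format {{.Repository}}:{{.Tag}}` output against the expected preloaded set and skips tarball extraction when everything is already present. A sketch of that comparison; the expected list below is abbreviated from the log output:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Abbreviated from the preloaded-images list in the log above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.27.3",
		"registry.k8s.io/etcd:3.5.7-0",
		"registry.k8s.io/pause:3.9",
	}
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing preloaded image:", img)
		}
	}
}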
	I0717 16:10:53.495767   94398 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:10:53.552633   94398 cni.go:84] Creating CNI manager for ""
	I0717 16:10:53.552649   94398 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:10:53.552670   94398 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0717 16:10:53.552696   94398 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-306000 NodeName:embed-certs-306000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 16:10:53.552850   94398 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-306000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 16:10:53.552923   94398 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=embed-certs-306000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:embed-certs-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 16:10:53.552988   94398 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 16:10:53.563213   94398 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:10:53.563302   94398 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:10:53.573908   94398 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0717 16:10:53.592858   94398 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:10:53.611462   94398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2101 bytes)
	I0717 16:10:53.629080   94398 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:10:53.633770   94398 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
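	[Editor's note] The shell compound at 16:10:53.633770 removes any stale control-plane.minikube.internal entry from /etc/hosts and appends the current IP in one pass. A sketch of building that command string in Go; hostsPinCmd is a hypothetical helper, not minikube's code.

package main

import "fmt"

// hostsPinCmd reproduces the logged pipeline: drop any previous entry for
// host, append the fresh "ip<TAB>host" mapping, then copy the result back
// into place with sudo. $$ keeps the temp file per-shell-PID.
func hostsPinCmd(ip, host string) string {
	return fmt.Sprintf(
		"{ grep -v $'\\t%s$' \"/etc/hosts\"; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
		host, ip, host)
}

func main() {
	fmt.Println(hostsPinCmd("192.168.67.2", "control-plane.minikube.internal"))
}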
	I0717 16:10:53.647422   94398 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000 for IP: 192.168.67.2
	I0717 16:10:53.647444   94398 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:10:53.647640   94398 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:10:53.647724   94398 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:10:53.647855   94398 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/client.key
	I0717 16:10:53.647967   94398 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/apiserver.key.c7fa3a9e
	I0717 16:10:53.648052   94398 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/proxy-client.key
	I0717 16:10:53.648321   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:10:53.648367   94398 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:10:53.648381   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:10:53.648429   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:10:53.648469   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:10:53.648510   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:10:53.648587   94398 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:10:53.649232   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:10:53.673862   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 16:10:53.697234   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:10:53.721361   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/embed-certs-306000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 16:10:53.747455   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:10:53.772738   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:10:53.799055   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:10:53.825029   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:10:53.848085   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:10:53.870547   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:10:53.892795   94398 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:10:53.915316   94398 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:10:53.933105   94398 ssh_runner.go:195] Run: openssl version
	I0717 16:10:53.939169   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:10:53.948870   94398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:10:53.953349   94398 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:10:53.953407   94398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:10:53.960403   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:10:53.969425   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:10:53.979568   94398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:10:53.984701   94398 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:10:53.984749   94398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:10:53.992034   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:10:54.001310   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:10:54.011282   94398 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:10:54.015828   94398 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:10:54.015941   94398 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:10:54.023357   94398 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
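	[Editor's note] The openssl hash/symlink steps above follow the c_rehash convention: each CA PEM gets a <subject-hash>.0 symlink in /etc/ssl/certs so the system trust store can locate it. A sketch that shells out the same way (not minikube's actual helper; it needs root for the symlink):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the cert's subject hash, then symlinks
// /etc/ssl/certs/<hash>.0 to the PEM, mirroring the ln -fs in the log.
func linkBySubjectHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}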
	I0717 16:10:54.032337   94398 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:10:54.036728   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 16:10:54.044027   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 16:10:54.051181   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 16:10:54.058238   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 16:10:54.064846   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 16:10:54.071754   94398 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
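	[Editor's note] The six `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours. An equivalent check in pure Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// validFor24h mirrors `openssl x509 -checkend 86400`: parse the PEM cert and
// confirm it will still be valid 24 hours from now.
func validFor24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor24h("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	fmt.Println(ok, err)
}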
	I0717 16:10:54.078666   94398 kubeadm.go:404] StartCluster: {Name:embed-certs-306000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-306000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:10:54.078800   94398 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:10:54.098456   94398 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:10:54.107714   94398 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 16:10:54.107731   94398 kubeadm.go:636] restartCluster start
	I0717 16:10:54.107789   94398 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 16:10:54.116446   94398 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:54.116518   94398 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" embed-certs-306000
	I0717 16:10:54.168595   94398 kubeconfig.go:135] verify returned: extract IP: "embed-certs-306000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:10:54.168788   94398 kubeconfig.go:146] "embed-certs-306000" context is missing from /Users/jenkins/minikube-integration/16899-76867/kubeconfig - will repair!
	I0717 16:10:54.169120   94398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
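	[Editor's note] The verify/repair above notices the embed-certs-306000 context is missing from the kubeconfig and rewrites the file under a write lock. A rough sketch of such a repair with client-go's clientcmd package; the function name and server URL are illustrative, not minikube's implementation.

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// repairContext loads the kubeconfig, adds a cluster/context pair for the
// profile if absent, and writes the file back.
func repairContext(kubeconfig, profile, server string) error {
	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		return err
	}
	if _, ok := cfg.Contexts[profile]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server
		cfg.Clusters[profile] = cluster
		ctx := api.NewContext()
		ctx.Cluster = profile
		ctx.AuthInfo = profile
		cfg.Contexts[profile] = ctx
	}
	return clientcmd.WriteToFile(*cfg, kubeconfig)
}

func main() {
	// Server URL is illustrative; the real value comes from the forwarded port.
	_ = repairContext("/Users/jenkins/minikube-integration/16899-76867/kubeconfig",
		"embed-certs-306000", "https://127.0.0.1:57491")
}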
	I0717 16:10:54.170815   94398 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 16:10:54.180431   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:54.180549   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:54.190921   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:54.692245   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:54.692386   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:54.704575   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:55.192630   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:55.192799   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:55.205143   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:50.554547   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:50.564554   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:50.584366   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.584381   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:50.584450   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:50.604191   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.604238   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:50.604303   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:50.625875   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.625888   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:50.625948   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:50.647327   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.647340   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:50.647414   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:50.667716   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.667729   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:50.667816   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:50.687381   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.687394   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:50.687465   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:50.709873   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.709885   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:50.709954   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:50.730732   93835 logs.go:284] 0 containers: []
	W0717 16:10:50.730753   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:50.730767   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:50.730781   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:50.791974   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:50.791993   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:50.838164   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:50.838186   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:50.854855   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:50.854872   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:50.918143   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:50.918157   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:50.918164   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:53.435419   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:53.449629   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:53.470900   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.470913   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:53.470982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:53.491478   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.491491   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:53.491571   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:53.512355   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.512370   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:53.512442   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:53.533032   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.533046   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:53.533116   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:53.556595   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.556606   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:53.556668   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:53.577834   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.577847   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:53.577916   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:53.601063   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.601075   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:53.601147   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:53.621727   93835 logs.go:284] 0 containers: []
	W0717 16:10:53.621741   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:53.621748   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:53.621757   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:53.664905   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:53.664923   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:53.680164   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:53.680178   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:53.744960   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:53.744973   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:53.744981   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:53.762699   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:53.762716   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:55.692517   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:55.692629   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:55.704864   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:56.192293   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:56.192406   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:56.204474   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:56.691285   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:56.691412   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:56.701956   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:57.193067   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:57.193188   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:57.205267   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:57.693121   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:57.693260   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:57.706913   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:58.191082   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:58.191227   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:58.203260   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:58.692045   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:58.692199   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:58.705022   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:59.191047   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:59.191109   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:59.201775   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:59.693186   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:10:59.693318   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:10:59.705865   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:00.191664   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:00.191801   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:00.204356   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:10:56.325331   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:56.337715   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:56.357505   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.357518   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:56.357587   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:56.375675   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.375687   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:56.375757   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:56.394743   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.394757   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:56.394824   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:56.413787   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.413799   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:56.413889   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:56.434849   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.434870   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:56.434930   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:56.455388   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.455409   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:56.455471   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:56.475473   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.475486   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:56.475565   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:56.494828   93835 logs.go:284] 0 containers: []
	W0717 16:10:56.494842   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:56.494850   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:56.494860   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:56.552578   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:56.552591   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:56.552598   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:10:56.569742   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:56.569757   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:56.621258   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:56.621272   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:56.662749   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:56.662766   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:59.179075   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:10:59.191143   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:10:59.210631   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.210642   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:10:59.210711   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:10:59.228456   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.228469   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:10:59.228536   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:10:59.247316   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.247329   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:10:59.247398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:10:59.266948   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.266962   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:10:59.267031   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:10:59.286969   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.286981   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:10:59.287047   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:10:59.306193   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.306207   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:10:59.306273   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:10:59.326492   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.326505   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:10:59.326576   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:10:59.347147   93835 logs.go:284] 0 containers: []
	W0717 16:10:59.347161   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:10:59.347168   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:10:59.347175   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:10:59.400761   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:10:59.400776   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:10:59.442358   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:10:59.442373   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:10:59.457274   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:10:59.457289   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:10:59.515710   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:10:59.515721   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:10:59.515755   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:00.691397   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:00.691554   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:00.703825   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:01.191112   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:01.191300   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:01.203318   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:01.693130   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:01.693312   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:01.705708   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:02.192052   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:02.192118   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:02.203580   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:02.692049   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:02.692165   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:02.704417   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:03.193278   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:03.193383   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:03.205697   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:03.693154   94398 api_server.go:166] Checking apiserver status ...
	I0717 16:11:03.693341   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:11:03.705599   94398 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
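	[Editor's note] The repeated "Checking apiserver status ..." entries above are a roughly 500 ms poll: pgrep exits 1 until a kube-apiserver process appears, and each non-zero exit is logged with empty stdout/stderr. A local-only Go sketch of the same loop (the real check runs the command over SSH into the node):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the apiserver process exists or the
// deadline passes, mirroring the loop visible in the log.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // exit 0: a matching process exists
		}
		time.Sleep(500 * time.Millisecond) // exit 1: not found yet, retry
	}
	return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(10 * time.Second)
	fmt.Println(pid, err)
}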
	I0717 16:11:04.182651   94398 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 16:11:04.182719   94398 kubeadm.go:1128] stopping kube-system containers ...
	I0717 16:11:04.182909   94398 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:11:04.206180   94398 docker.go:462] Stopping containers: [5a5cf379fb85 000c54de18b7 f0dcc48f2d2f 96e4e7a425ff bdaee0c89b1b 5a8b231c079f 9598322a449f 8a70d992b36a 2f51f21a5ad3 7b7a805cd5d6 50f1ad1878cd 658cfc31a89e f9c108edf1ad 6a299bdfe611 727b19888fb1 e766dcb647fd 5e3eaafb1ed1 35aa25f0c154 19d37b08ecd8 c13cd35000bf]
	I0717 16:11:04.206270   94398 ssh_runner.go:195] Run: docker stop 5a5cf379fb85 000c54de18b7 f0dcc48f2d2f 96e4e7a425ff bdaee0c89b1b 5a8b231c079f 9598322a449f 8a70d992b36a 2f51f21a5ad3 7b7a805cd5d6 50f1ad1878cd 658cfc31a89e f9c108edf1ad 6a299bdfe611 727b19888fb1 e766dcb647fd 5e3eaafb1ed1 35aa25f0c154 19d37b08ecd8 c13cd35000bf
	I0717 16:11:04.228272   94398 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 16:11:04.240130   94398 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:11:04.249723   94398 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jul 17 23:09 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 17 23:09 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2011 Jul 17 23:09 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Jul 17 23:09 /etc/kubernetes/scheduler.conf
	
	I0717 16:11:04.249793   94398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 16:11:04.259097   94398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 16:11:04.269949   94398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 16:11:04.279269   94398 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:04.279333   94398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 16:11:04.289105   94398 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 16:11:04.299027   94398 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:11:04.299092   94398 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 16:11:04.308742   94398 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:11:04.318398   94398 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 16:11:04.318412   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:11:04.371840   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:11:05.171642   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:11:05.318425   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
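	[Editor's note] With the stale controller-manager and scheduler kubeconfigs removed, the restart path re-runs individual `kubeadm init` phases against the regenerated /var/tmp/minikube/kubeadm.yaml instead of a full init (the etcd phase follows at 16:11:05.372). A sketch of that sequence with plain local exec; the real calls run via `sudo env PATH=...` over SSH.

package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the phase sequence visible in the log.
func runInitPhases(config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}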
	I0717 16:11:02.032718   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:02.044102   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:02.064317   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.064331   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:02.064399   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:02.090663   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.090677   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:02.090745   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:02.110134   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.110147   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:02.110219   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:02.130420   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.130434   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:02.130501   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:02.150045   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.150058   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:02.150127   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:02.169243   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.169257   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:02.169327   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:02.188241   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.188255   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:02.188323   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:02.209048   93835 logs.go:284] 0 containers: []
	W0717 16:11:02.209060   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:02.209067   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:02.209074   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:02.249643   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:02.249658   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:02.263819   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:02.263835   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:02.320577   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:02.320589   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:02.320597   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:02.336999   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:02.337013   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:04.889945   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:04.901352   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:04.921897   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.921911   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:04.921997   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:04.942258   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.942271   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:04.942340   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:04.962001   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.962015   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:04.962097   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:04.982082   93835 logs.go:284] 0 containers: []
	W0717 16:11:04.982096   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:04.982169   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:05.003819   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.003834   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:05.003903   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:05.025602   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.025619   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:05.025699   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:05.046687   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.046704   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:05.046787   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:05.098791   93835 logs.go:284] 0 containers: []
	W0717 16:11:05.098805   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:05.098812   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:05.098820   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:05.153430   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:05.153446   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:05.197431   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:05.197452   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:05.212791   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:05.212808   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:05.278635   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:05.278662   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:05.278685   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:05.372555   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:11:05.504668   94398 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:11:05.504752   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:06.021255   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:06.521042   94398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:06.591195   94398 api_server.go:72] duration metric: took 1.086515145s to wait for apiserver process to appear ...
	I0717 16:11:06.591213   94398 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:11:06.591229   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:06.592934   94398 api_server.go:269] stopped: https://127.0.0.1:57491/healthz: Get "https://127.0.0.1:57491/healthz": EOF
	I0717 16:11:07.093597   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:08.742549   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 16:11:08.742572   94398 api_server.go:103] status: https://127.0.0.1:57491/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 16:11:08.742585   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:08.804806   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:11:08.804840   94398 api_server.go:103] status: https://127.0.0.1:57491/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
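	[Editor's note] The healthz probes above show the expected startup progression: EOF while the apiserver socket is not yet serving, 403 for the anonymous probe before RBAC bootstrap completes, then 500 with per-poststarthook detail until every hook reports ok. A minimal Go sketch of such a probe; certificate verification is skipped here because the host trust store lacks the minikube CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz GETs the endpoint once and prints the status plus the body the
// apiserver returns while its poststarthooks are still completing.
func probeHealthz(url string) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("stopped:", err) // e.g. EOF while the apiserver is starting
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
}

func main() {
	probeHealthz("https://127.0.0.1:57491/healthz")
}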
	I0717 16:11:09.093129   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:09.100041   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:11:09.100059   94398 api_server.go:103] status: https://127.0.0.1:57491/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:11:09.593099   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:09.599287   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:11:09.599303   94398 api_server.go:103] status: https://127.0.0.1:57491/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:11:10.093230   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:10.100387   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:11:10.100409   94398 api_server.go:103] status: https://127.0.0.1:57491/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
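For anyone reproducing this by hand: the 500 bodies above are the apiserver's verbose healthz report, where each [+]/[-] line is one named check, and the two [-] poststarthook entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are what keeps the endpoint unhealthy until they flip to ok around 16:11:10. Below is a minimal standalone probe of the same endpoint; it assumes the local port from the log and skips TLS verification for brevity, and is an illustration, not minikube's actual api_server.go code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Port 57491 is taken from the log above; adjust for your cluster.
        client := &http.Client{Transport: &http.Transport{
            // Skipping verification only because the local apiserver cert is
            // not in the system trust store; an assumption for brevity.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://127.0.0.1:57491/healthz?verbose")
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // On 500 the body is the per-check [+]/[-] list seen above; on 200
        // with ?verbose it lists every check as ok.
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }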
	I0717 16:11:07.796037   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:07.808044   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:07.836822   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.836841   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:07.836939   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:07.857875   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.857890   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:07.857958   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:07.878096   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.878109   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:07.878174   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:07.897490   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.897509   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:07.897595   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:07.919044   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.919058   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:07.919157   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:07.942026   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.942044   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:07.942122   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:07.966706   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.966736   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:07.966820   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:07.987555   93835 logs.go:284] 0 containers: []
	W0717 16:11:07.987568   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:07.987575   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:07.987583   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:08.055799   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:08.055814   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:08.120776   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:08.120799   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:08.137139   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:08.137156   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:08.203696   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:08.203709   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:08.203717   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
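Each "Gathering logs for ..." pass above (and every later repetition of it in this log) is the same fixed set of diagnostic commands, run over SSH against the node whenever no control-plane containers are found. A rough Go sketch that runs the exact commands from the log locally; minikube drives them through ssh_runner against the node, so treat this as an illustration of the cycle, not its implementation.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Commands copied verbatim from the ssh_runner lines above.
        steps := []struct{ name, cmd string }{
            {"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
            {"kubelet", "sudo journalctl -u kubelet -n 400"},
            {"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
            {"describe nodes", "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
            {"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
        }
        for _, s := range steps {
            out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
            // "describe nodes" fails with "connection refused" for as long as
            // no apiserver is listening on localhost:8443, as seen above.
            fmt.Printf("== %s (err=%v) ==\n%s\n", s.name, err, out)
        }
    }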
	I0717 16:11:10.593333   94398 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:57491/healthz ...
	I0717 16:11:10.603606   94398 api_server.go:279] https://127.0.0.1:57491/healthz returned 200:
	ok
	I0717 16:11:10.613041   94398 api_server.go:141] control plane version: v1.27.3
	I0717 16:11:10.613063   94398 api_server.go:131] duration metric: took 4.02179656s to wait for apiserver health ...
	I0717 16:11:10.613076   94398 cni.go:84] Creating CNI manager for ""
	I0717 16:11:10.613090   94398 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:11:10.637095   94398 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 16:11:10.659032   94398 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 16:11:10.693715   94398 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 16:11:10.693732   94398 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 16:11:10.722864   94398 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
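The CNI step above amounts to two ssh_runner calls: stream the generated kindnet manifest to /var/tmp/minikube/cni.yaml (2438 bytes in this run) and apply it with the version-matched kubectl. A local sketch of the same sequence follows; the manifest body is not shown in the log, so a placeholder stands in for it.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Placeholder: the real manifest (2438 bytes per the log) is generated
        // by minikube's CNI manager and is not reproduced here.
        manifest := []byte("# kindnet DaemonSet + ConfigMap would go here\n")
        if err := os.MkdirAll("/var/tmp/minikube", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/var/tmp/minikube/cni.yaml", manifest, 0o644); err != nil {
            panic(err)
        }
        // Same invocation as the log line above, minus the SSH hop.
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.3/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        fmt.Println(string(out), err)
    }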
	I0717 16:11:11.509141   94398 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:11:11.516511   94398 system_pods.go:59] 9 kube-system pods found
	I0717 16:11:11.516530   94398 system_pods.go:61] "coredns-5d78c9869d-t79mm" [0aca79f4-a6e5-43ca-8735-d2195d76b2bf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:11:11.516536   94398 system_pods.go:61] "etcd-embed-certs-306000" [b8252e50-ac12-4efd-ad37-ae3244cfd38e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:11:11.516541   94398 system_pods.go:61] "kindnet-nzmvl" [1ab1cb65-4b30-482a-959e-6043ff8c94b2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:11:11.516551   94398 system_pods.go:61] "kube-apiserver-embed-certs-306000" [61f90f6b-d149-486a-a4bc-865abc02093e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:11:11.516557   94398 system_pods.go:61] "kube-controller-manager-embed-certs-306000" [12b82e75-848f-4572-9fea-6d257f1fc00a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:11:11.516562   94398 system_pods.go:61] "kube-proxy-6vxcp" [ed53de26-91ef-4257-bc36-a31b7744a2d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0717 16:11:11.516567   94398 system_pods.go:61] "kube-scheduler-embed-certs-306000" [2238f0f4-b0e2-433a-9eca-232370091286] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:11:11.516575   94398 system_pods.go:61] "metrics-server-74d5c6b9c-t6m6d" [6f251524-deca-4a6c-8138-270edbed228d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:11:11.516581   94398 system_pods.go:61] "storage-provisioner" [3b81b920-40a9-4724-8255-f1df6ec57381] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0717 16:11:11.516585   94398 system_pods.go:74] duration metric: took 7.432724ms to wait for pod list to return data ...
	I0717 16:11:11.516593   94398 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:11:11.519945   94398 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:11:11.519959   94398 node_conditions.go:123] node cpu capacity is 6
	I0717 16:11:11.519976   94398 node_conditions.go:105] duration metric: took 3.37915ms to run NodePressure ...
	I0717 16:11:11.519992   94398 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:11:11.662653   94398 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0717 16:11:11.667167   94398 kubeadm.go:787] kubelet initialised
	I0717 16:11:11.667179   94398 kubeadm.go:788] duration metric: took 4.512781ms waiting for restarted kubelet to initialise ...
	I0717 16:11:11.667186   94398 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0717 16:11:11.673015   94398 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:13.684578   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
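The pod_ready.go wait that produces the repeating "Ready":"False" lines above and below is a plain poll on each system-critical pod's Ready condition, capped at 4m0s. An illustrative client-go equivalent, assuming a default kubeconfig and using the pod name from the log; this is a sketch of the pattern, not minikube's code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // 2s interval roughly matches the cadence of the log lines; 4m0s is
        // the per-pod timeout stated by pod_ready.go above.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx,
                    "coredns-5d78c9869d-t79mm", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("ready:", err == nil)
    }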
	I0717 16:11:10.722294   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:10.737105   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:10.757262   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.757277   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:10.757346   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:10.776587   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.776600   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:10.776667   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:10.797364   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.797384   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:10.797484   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:10.820241   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.820260   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:10.820369   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:10.845565   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.845579   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:10.845652   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:10.865746   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.865759   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:10.865850   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:10.884879   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.884894   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:10.884960   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:10.904361   93835 logs.go:284] 0 containers: []
	W0717 16:11:10.904375   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:10.904382   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:10.904391   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:10.924221   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:10.924240   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:10.982436   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:10.982450   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:11.029474   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:11.029495   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:11.046370   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:11.046392   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:11.134328   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:13.635957   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:13.648062   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:13.667230   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.667243   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:13.667310   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:13.687993   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.688004   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:13.688078   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:13.707557   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.707569   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:13.707635   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:13.725850   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.725866   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:13.725936   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:13.746800   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.746814   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:13.746884   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:13.766155   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.766170   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:13.766239   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:13.785520   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.785533   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:13.785602   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:13.804423   93835 logs.go:284] 0 containers: []
	W0717 16:11:13.804437   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:13.804444   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:13.804451   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:13.846303   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:13.846320   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:13.860235   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:13.860251   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:13.919154   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:13.919168   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:13.919176   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:13.935475   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:13.935489   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:15.697359   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:18.186734   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:20.187931   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:16.489224   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:16.502116   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:16.521668   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.521681   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:16.521749   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:16.541973   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.541986   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:16.542050   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:16.561343   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.561355   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:16.561422   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:16.580112   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.580126   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:16.580193   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:16.599380   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.599393   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:16.599465   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:16.619800   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.619813   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:16.619883   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:16.638556   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.638570   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:16.638638   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:16.662372   93835 logs.go:284] 0 containers: []
	W0717 16:11:16.662388   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:16.662397   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:16.662405   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:16.717464   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:16.717481   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:16.759570   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:16.759585   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:16.773696   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:16.773712   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:16.832301   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:16.832315   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:16.832322   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:19.349387   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:19.360398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:19.379799   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.379813   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:19.379881   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:19.400137   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.400150   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:19.400221   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:19.420120   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.420135   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:19.420202   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:19.440652   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.440666   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:19.440739   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:19.461061   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.461076   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:19.461170   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:19.479965   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.479979   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:19.480046   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:19.500592   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.500605   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:19.500672   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:19.520704   93835 logs.go:284] 0 containers: []
	W0717 16:11:19.520717   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:19.520725   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:19.520732   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:19.534116   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:19.534133   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:19.594934   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:19.594948   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:19.594956   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:19.611757   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:19.611770   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:19.666218   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:19.666233   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:22.686593   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:25.185774   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:22.213692   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:22.226048   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:22.245727   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.245742   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:22.245811   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:22.266157   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.266172   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:22.266239   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:22.284979   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.284995   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:22.285068   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:22.306538   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.306555   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:22.306628   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:22.325484   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.325498   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:22.325568   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:22.345799   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.345813   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:22.345883   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:22.365977   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.365991   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:22.366060   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:22.397379   93835 logs.go:284] 0 containers: []
	W0717 16:11:22.397394   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:22.397401   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:22.397408   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:22.450063   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:22.450077   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:22.490827   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:22.490840   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:22.504655   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:22.504669   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:22.562404   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:22.562415   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:22.562425   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:25.079939   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:25.092345   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:25.111113   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.111126   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:25.111203   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:25.129832   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.129844   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:25.129911   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:25.149703   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.149717   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:25.149789   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:25.169578   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.169590   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:25.169650   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:25.189904   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.189915   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:25.189982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:25.209990   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.210003   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:25.210072   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:25.229691   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.229704   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:25.229767   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:25.250261   93835 logs.go:284] 0 containers: []
	W0717 16:11:25.250274   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:25.250282   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:25.250293   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:25.293962   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:25.293981   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:25.310454   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:25.310470   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:25.371428   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:25.371440   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:25.371447   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:25.387484   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:25.387497   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:27.187098   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:29.687087   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:27.939780   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:27.952060   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:27.971854   93835 logs.go:284] 0 containers: []
	W0717 16:11:27.971867   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:27.971939   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:27.992234   93835 logs.go:284] 0 containers: []
	W0717 16:11:27.992248   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:27.992317   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:28.011020   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.011032   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:28.011099   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:28.032885   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.032897   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:28.032965   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:28.053628   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.053643   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:28.053714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:28.072098   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.072111   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:28.072186   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:28.091790   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.091804   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:28.091872   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:28.110699   93835 logs.go:284] 0 containers: []
	W0717 16:11:28.110712   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:28.110720   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:28.110727   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:28.151524   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:28.151540   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:28.166240   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:28.166255   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:28.224810   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:28.224823   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:28.224830   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:28.241412   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:28.241426   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:31.688212   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:34.186856   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:30.800962   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:30.813385   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:30.833309   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.833323   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:30.833390   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:30.852282   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.852297   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:30.852365   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:30.870815   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.870827   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:30.870893   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:30.889400   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.889414   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:30.889481   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:30.909299   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.909328   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:30.909393   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:30.928719   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.928733   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:30.928802   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:30.949012   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.949025   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:30.949093   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:30.967761   93835 logs.go:284] 0 containers: []
	W0717 16:11:30.967774   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:30.967780   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:30.967787   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:30.984715   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:30.984729   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:31.035522   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:31.035538   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:31.078460   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:31.078474   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:31.092562   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:31.092597   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:31.149881   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:33.651898   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:33.664361   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:33.684241   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.684259   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:33.684330   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:33.703735   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.703748   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:33.703819   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:33.724109   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.724123   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:33.724189   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:33.743630   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.743645   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:33.743714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:33.763041   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.763054   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:33.763123   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:33.781213   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.781227   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:33.781304   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:33.800470   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.800483   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:33.800549   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:33.820674   93835 logs.go:284] 0 containers: []
	W0717 16:11:33.820689   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:33.820695   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:33.820704   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:33.862316   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:33.862331   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:33.876505   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:33.876520   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:33.934552   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:33.934564   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:33.934570   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:33.950959   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:33.950971   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:36.187787   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:38.188123   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:36.503291   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:36.514325   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:36.535202   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.535215   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:36.535286   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:36.556627   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.556641   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:36.556714   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:36.599015   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.599028   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:36.599096   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:36.619003   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.619017   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:36.619092   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:36.638681   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.638694   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:36.638763   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:36.658586   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.658598   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:36.658664   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:36.677914   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.677926   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:36.677992   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:36.697802   93835 logs.go:284] 0 containers: []
	W0717 16:11:36.697816   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:36.697823   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:36.697833   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:36.749848   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:36.749863   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:36.789680   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:36.789694   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:36.804203   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:36.804217   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:36.861013   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:36.861024   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:36.861031   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:39.378318   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:39.390982   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:11:39.410146   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.410159   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:11:39.410226   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:11:39.429627   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.429640   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:11:39.429708   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:11:39.448992   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.449005   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:11:39.449074   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:11:39.467752   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.467765   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:11:39.467832   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:11:39.487931   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.487944   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:11:39.488010   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:11:39.509205   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.509220   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:11:39.509290   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:11:39.530820   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.530835   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:11:39.530909   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:11:39.551623   93835 logs.go:284] 0 containers: []
	W0717 16:11:39.551641   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:11:39.551648   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:11:39.551659   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:11:39.617986   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:11:39.618006   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:11:39.632860   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:11:39.632894   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:11:39.692796   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:11:39.692808   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:11:39.692815   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:11:39.709142   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:11:39.709155   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0717 16:11:42.264100   93835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:11:42.276461   93835 kubeadm.go:640] restartCluster took 4m13.111495627s
	W0717 16:11:42.276502   93835 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
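The verdict "apiserver process never appeared" follows from the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above, none of which printed a PID (pgrep exits 1 when nothing matches). A rough sketch of that liveness test, assuming a local pgrep; this is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -x exact match, -n newest process, -f match the full command line,
	// mirroring the pgrep invocation in the log above.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		// An *exec.ExitError with status 1 means no matching process,
		// which is the failure mode seen in this run.
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}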
	I0717 16:11:42.276522   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 16:11:42.690572   93835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:11:42.701697   93835 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:11:42.710816   93835 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:11:42.710870   93835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:11:42.719735   93835 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
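The exit status 2 above simply means none of the four kubeconfig files exist, so there is no stale configuration to clean up before `kubeadm init` runs. Expressed locally with os.Stat instead of shelling out to `ls -la`, the same test looks roughly like this sketch (an illustration under that assumption, not minikube's code):

package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	stale := false
	for _, p := range paths {
		if _, err := os.Stat(p); err == nil {
			stale = true // an old config exists and would need cleanup
		} else {
			fmt.Printf("%s: %v\n", p, err) // "no such file or directory", as in the log
		}
	}
	fmt.Println("stale config cleanup needed:", stale)
}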
	I0717 16:11:42.719764   93835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:11:42.769533   93835 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:11:42.769653   93835 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:11:43.030582   93835 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:11:43.030745   93835 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:11:43.030836   93835 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:11:43.210961   93835 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:11:43.211663   93835 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:11:43.218510   93835 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:11:43.288047   93835 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:11:43.309713   93835 out.go:204]   - Generating certificates and keys ...
	I0717 16:11:43.309791   93835 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:11:43.309860   93835 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:11:43.309929   93835 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 16:11:43.309982   93835 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 16:11:43.310059   93835 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 16:11:43.310123   93835 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 16:11:43.310232   93835 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 16:11:43.310287   93835 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 16:11:43.310343   93835 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 16:11:43.310482   93835 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 16:11:43.310527   93835 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 16:11:43.310596   93835 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:11:43.502662   93835 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:11:43.694360   93835 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:11:43.844413   93835 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:11:43.952113   93835 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:11:43.952913   93835 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:11:40.688978   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:43.185712   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:45.186554   94398 pod_ready.go:102] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:43.974845   93835 out.go:204]   - Booting up control plane ...
	I0717 16:11:43.975085   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:11:43.975246   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:11:43.975363   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:11:43.975502   93835 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:11:43.975754   93835 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:11:46.685826   94398 pod_ready.go:92] pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:46.685839   94398 pod_ready.go:81] duration metric: took 35.01240693s waiting for pod "coredns-5d78c9869d-t79mm" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.685846   94398 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.690601   94398 pod_ready.go:92] pod "etcd-embed-certs-306000" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:46.690611   94398 pod_ready.go:81] duration metric: took 4.75392ms waiting for pod "etcd-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.690618   94398 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.695575   94398 pod_ready.go:92] pod "kube-apiserver-embed-certs-306000" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:46.695584   94398 pod_ready.go:81] duration metric: took 4.961015ms waiting for pod "kube-apiserver-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.695590   94398 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.700564   94398 pod_ready.go:92] pod "kube-controller-manager-embed-certs-306000" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:46.700573   94398 pod_ready.go:81] duration metric: took 4.9778ms waiting for pod "kube-controller-manager-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.700579   94398 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-6vxcp" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.705431   94398 pod_ready.go:92] pod "kube-proxy-6vxcp" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:46.705441   94398 pod_ready.go:81] duration metric: took 4.856982ms waiting for pod "kube-proxy-6vxcp" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:46.705447   94398 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:47.084601   94398 pod_ready.go:92] pod "kube-scheduler-embed-certs-306000" in "kube-system" namespace has status "Ready":"True"
	I0717 16:11:47.084612   94398 pod_ready.go:81] duration metric: took 379.156107ms waiting for pod "kube-scheduler-embed-certs-306000" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:47.084619   94398 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace to be "Ready" ...
	I0717 16:11:49.492921   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:51.494769   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:53.991561   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:56.493632   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:11:58.991324   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:00.992154   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:03.495233   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:05.993493   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:08.496496   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:10.993674   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:13.494033   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:15.495951   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:17.992661   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:20.493389   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:22.991936   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:23.963686   93835 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:12:23.964879   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:23.965093   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:25.492940   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:27.495382   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:29.495635   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:28.966848   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:28.967069   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:31.991106   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:33.992434   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:36.493565   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:38.494163   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:38.968951   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:38.969217   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:12:40.992597   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:43.492782   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:45.494651   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:47.991870   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:50.493778   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:52.992861   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:54.993076   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:57.492994   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:59.494882   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:12:58.971415   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:12:58.971613   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:13:01.993787   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:04.493910   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:06.495050   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:08.992765   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:10.992819   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:13.493294   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:15.494411   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:17.992540   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:20.495036   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:22.992012   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:24.992351   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:26.993240   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:29.495720   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:31.992510   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:34.493569   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:38.974169   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:13:38.974394   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:13:38.974409   93835 kubeadm.go:322] 
	I0717 16:13:38.974446   93835 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:13:38.974524   93835 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:13:38.974533   93835 kubeadm.go:322] 
	I0717 16:13:38.974597   93835 kubeadm.go:322] This error is likely caused by:
	I0717 16:13:38.974636   93835 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:13:38.974762   93835 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:13:38.974776   93835 kubeadm.go:322] 
	I0717 16:13:38.974939   93835 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:13:38.974980   93835 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:13:38.975024   93835 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:13:38.975029   93835 kubeadm.go:322] 
	I0717 16:13:38.975152   93835 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:13:38.975267   93835 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:13:38.975370   93835 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:13:38.975435   93835 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:13:38.975498   93835 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:13:38.975527   93835 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:13:38.976853   93835 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:13:38.976914   93835 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:13:38.977018   93835 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:13:38.977096   93835 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:13:38.977165   93835 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:13:38.977221   93835 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0717 16:13:38.977296   93835 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
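Each `[kubelet-check]` line in the dump above is one failed round of the probe kubeadm names explicitly: a GET against http://localhost:10248/healthz ('curl -sSL http://localhost:10248/healthz'), retried until the wait-control-plane timeout expires. A minimal Go sketch of that loop, as an illustration of the documented curl check rather than kubeadm's source:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls the kubelet's healthz endpoint until it answers
// 200 or the deadline passes, like the repeated checks in the log above.
func waitKubeletHealthy(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is up
			}
		}
		// Fixed short delay for simplicity; kubeadm's own checks above
		// back off progressively (5s, 10s, 20s, 40s).
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("kubelet /healthz never returned 200 within %s", timeout)
}

func main() {
	if err := waitKubeletHealthy(40 * time.Second); err != nil {
		fmt.Println(err)
	}
}

In this run every attempt fails with "connection refused" because nothing is listening on 10248, consistent with the kubelet never having come up.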
	
	I0717 16:13:38.977331   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0717 16:13:39.389138   93835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:13:39.400445   93835 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:13:39.400507   93835 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:13:39.409262   93835 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 16:13:39.409282   93835 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:13:39.459947   93835 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I0717 16:13:39.459994   93835 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:13:39.731126   93835 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:13:39.731279   93835 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:13:39.731447   93835 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:13:39.913619   93835 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:13:39.914356   93835 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:13:39.921205   93835 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I0717 16:13:39.998119   93835 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:13:36.495410   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:38.992845   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:40.019725   93835 out.go:204]   - Generating certificates and keys ...
	I0717 16:13:40.019790   93835 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:13:40.019869   93835 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:13:40.019960   93835 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0717 16:13:40.020011   93835 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0717 16:13:40.020077   93835 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0717 16:13:40.020122   93835 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0717 16:13:40.020178   93835 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0717 16:13:40.020250   93835 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0717 16:13:40.020341   93835 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0717 16:13:40.020435   93835 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0717 16:13:40.020476   93835 kubeadm.go:322] [certs] Using the existing "sa" key
	I0717 16:13:40.020520   93835 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:13:40.095710   93835 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:13:40.210159   93835 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:13:40.415683   93835 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:13:40.529613   93835 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:13:40.530275   93835 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:13:41.495762   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:43.495871   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:40.551861   93835 out.go:204]   - Booting up control plane ...
	I0717 16:13:40.552029   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:13:40.552248   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:13:40.552358   93835 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:13:40.552516   93835 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:13:40.552838   93835 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:13:45.992490   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:47.993907   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:50.494225   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:52.993526   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:55.495900   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:57.992545   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:13:59.992647   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:01.992714   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:03.993502   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:06.493765   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:08.496523   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:10.992922   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:12.993099   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:14.993377   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:16.993491   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:19.493609   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:21.994801   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:24.494684   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:20.540527   93835 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0717 16:14:20.541233   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:20.541448   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:26.497359   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:28.993753   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:25.543077   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:25.543306   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:31.494631   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:33.494845   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:35.992983   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:37.993682   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:39.994102   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:35.544093   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:35.544297   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:14:42.495407   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:44.495452   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:46.993883   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:49.495360   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:51.993282   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:54.496959   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:56.993768   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:59.496723   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:14:55.546072   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:14:55.546321   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:15:01.993489   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:03.994633   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:06.494966   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:08.495373   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:10.496153   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:12.496917   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:14.993707   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:17.497371   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:19.497441   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:21.994863   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:24.495154   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:26.495640   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:28.993646   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:30.994681   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:33.495709   94398 pod_ready.go:102] pod "metrics-server-74d5c6b9c-t6m6d" in "kube-system" namespace has status "Ready":"False"
	I0717 16:15:35.547474   93835 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0717 16:15:35.547687   93835 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0717 16:15:35.547702   93835 kubeadm.go:322] 
	I0717 16:15:35.547758   93835 kubeadm.go:322] Unfortunately, an error has occurred:
	I0717 16:15:35.547811   93835 kubeadm.go:322] 	timed out waiting for the condition
	I0717 16:15:35.547818   93835 kubeadm.go:322] 
	I0717 16:15:35.547889   93835 kubeadm.go:322] This error is likely caused by:
	I0717 16:15:35.547934   93835 kubeadm.go:322] 	- The kubelet is not running
	I0717 16:15:35.548051   93835 kubeadm.go:322] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0717 16:15:35.548058   93835 kubeadm.go:322] 
	I0717 16:15:35.548228   93835 kubeadm.go:322] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0717 16:15:35.548277   93835 kubeadm.go:322] 	- 'systemctl status kubelet'
	I0717 16:15:35.548335   93835 kubeadm.go:322] 	- 'journalctl -xeu kubelet'
	I0717 16:15:35.548352   93835 kubeadm.go:322] 
	I0717 16:15:35.548489   93835 kubeadm.go:322] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0717 16:15:35.548591   93835 kubeadm.go:322] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I0717 16:15:35.548686   93835 kubeadm.go:322] Here is one example how you may list all Kubernetes containers running in docker:
	I0717 16:15:35.548726   93835 kubeadm.go:322] 	- 'docker ps -a | grep kube | grep -v pause'
	I0717 16:15:35.548788   93835 kubeadm.go:322] 	Once you have found the failing container, you can inspect its logs with:
	I0717 16:15:35.548820   93835 kubeadm.go:322] 	- 'docker logs CONTAINERID'
	I0717 16:15:35.550483   93835 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0717 16:15:35.550555   93835 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0717 16:15:35.550662   93835 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
	I0717 16:15:35.550753   93835 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0717 16:15:35.550826   93835 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0717 16:15:35.550883   93835 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0717 16:15:35.550914   93835 kubeadm.go:406] StartCluster complete in 8m6.413956871s
	I0717 16:15:35.551001   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0717 16:15:35.574903   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.574915   93835 logs.go:286] No container was found matching "kube-apiserver"
	I0717 16:15:35.575006   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0717 16:15:35.595318   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.595334   93835 logs.go:286] No container was found matching "etcd"
	I0717 16:15:35.595398   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0717 16:15:35.614450   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.614463   93835 logs.go:286] No container was found matching "coredns"
	I0717 16:15:35.614535   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0717 16:15:35.633854   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.633868   93835 logs.go:286] No container was found matching "kube-scheduler"
	I0717 16:15:35.633941   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0717 16:15:35.654538   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.654550   93835 logs.go:286] No container was found matching "kube-proxy"
	I0717 16:15:35.654616   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0717 16:15:35.673690   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.673704   93835 logs.go:286] No container was found matching "kube-controller-manager"
	I0717 16:15:35.673768   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0717 16:15:35.692920   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.692934   93835 logs.go:286] No container was found matching "kindnet"
	I0717 16:15:35.693003   93835 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I0717 16:15:35.712944   93835 logs.go:284] 0 containers: []
	W0717 16:15:35.712964   93835 logs.go:286] No container was found matching "kubernetes-dashboard"
	I0717 16:15:35.712974   93835 logs.go:123] Gathering logs for kubelet ...
	I0717 16:15:35.712988   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0717 16:15:35.753004   93835 logs.go:123] Gathering logs for dmesg ...
	I0717 16:15:35.753023   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0717 16:15:35.768094   93835 logs.go:123] Gathering logs for describe nodes ...
	I0717 16:15:35.768112   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0717 16:15:35.829397   93835 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0717 16:15:35.829409   93835 logs.go:123] Gathering logs for Docker ...
	I0717 16:15:35.829416   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0717 16:15:35.846538   93835 logs.go:123] Gathering logs for container status ...
	I0717 16:15:35.846553   93835 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0717 16:15:35.901689   93835 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0717 16:15:35.901715   93835 out.go:239] * 
	W0717 16:15:35.901777   93835 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0717 16:15:35.901816   93835 out.go:239] * 
	W0717 16:15:35.902485   93835 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0717 16:15:35.965205   93835 out.go:177] 
	W0717 16:15:36.007342   93835 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W0717 16:15:36.007407   93835 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0717 16:15:36.007431   93835 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0717 16:15:36.070330   93835 out.go:177] 
	
	* 
	* ==> Docker <==
	* Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.478847024Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.516483731Z" level=info msg="Loading containers: done."
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.524970441Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.525064793Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.550938254Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:17 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.551209856Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.289287062Z" level=info msg="Processing signal 'terminated'"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290215789Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290601746Z" level=info msg="Daemon shutdown complete"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290721710Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.347878934Z" level=info msg="Starting up"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.356497542Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.550910373Z" level=info msg="Loading containers: start."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.658745716Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.695384146Z" level=info msg="Loading containers: done."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705046377Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705110433Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731708392Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731827717Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-07-17T23:15:37Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:15:38 up  7:14,  0 users,  load average: 0.48, 0.81, 1.19
	Linux old-k8s-version-770000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 23:15:36 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 156.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: I0717 23:15:37.335103   16692 server.go:410] Version: v1.16.0
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: I0717 23:15:37.335517   16692 plugins.go:100] No cloud provider specified.
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: I0717 23:15:37.335557   16692 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: I0717 23:15:37.337521   16692 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: W0717 23:15:37.338273   16692 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: W0717 23:15:37.338342   16692 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:15:37 old-k8s-version-770000 kubelet[16692]: F0717 23:15:37.338368   16692 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 157.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:15:37 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: I0717 23:15:38.082816   16803 server.go:410] Version: v1.16.0
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: I0717 23:15:38.083020   16803 plugins.go:100] No cloud provider specified.
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: I0717 23:15:38.083030   16803 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: I0717 23:15:38.084751   16803 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: W0717 23:15:38.085385   16803 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: W0717 23:15:38.085455   16803 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:15:38 old-k8s-version-770000 kubelet[16803]: F0717 23:15:38.085490   16803 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:15:38 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:15:38 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:15:37.864907   94589 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (411.038396ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-770000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (508.38s)
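For reference, the log above both shows the kubelet crash-looping ("failed to run Kubelet: mountpoint for cpu not found", restart counter at 157) and carries its own remediation advice. A minimal follow-up sketch against this profile (the profile name, Kubernetes version, and flags are taken from the report itself; actually running these steps is an assumption, not part of the test):

	# Inspect the kubelet inside the minikube node, per the kubeadm advice above
	minikube -p old-k8s-version-770000 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-770000 ssh "sudo journalctl -xeu kubelet"
	# List Kubernetes containers via the Docker runtime, as the kubeadm output recommends
	minikube -p old-k8s-version-770000 ssh "docker ps -a | grep kube | grep -v pause"
	# Retry the start with the cgroup-driver override from the "Suggestion" line in the log
	minikube start -p old-k8s-version-770000 --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd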

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0717 16:15:45.102624   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:15:45.805336   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:15:50.459137   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:16:08.773709   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:16:50.637428   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:18:01.259369   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:18:01.761056   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:18:02.905777   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:18:11.584924   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:18:25.691051   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:18:28.945043   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:19:00.897471   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:19:24.807921   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:19:25.951865   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:19:27.172778   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:19:35.019477   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:20:27.594518   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:20:33.927722   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:20:45.808510   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:20:50.267850   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:20:50.463167   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:20:58.132913   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:21:08.776846   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:21:28.779085   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:21:48.534007   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:21:56.991435   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:22:13.516315   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:22:31.828869   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:23:01.272886   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:23:01.775745   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:23:02.921935   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:23:25.723782   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:24:27.211927   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:24:35.059147   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (381.864311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-770000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
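The 9m0s wait above polls the apiserver at https://127.0.0.1:57352 for pods labeled k8s-app=kubernetes-dashboard and receives EOF throughout, consistent with the apiserver being down. The same check can be made by hand (a sketch; the kubectl context name matching the profile is an assumption based on minikube's usual behavior):

	# Replicate the test's pod wait directly
	kubectl --context old-k8s-version-770000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Or poll the exact endpoint the helper uses; 57352 is the host mapping for the
	# apiserver's 8443/tcp (see the docker inspect output below)
	curl -k "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard"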
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56",
	        "Created": "2023-07-17T23:01:29.298658175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1241282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:07:11.601689472Z",
	            "FinishedAt": "2023-07-17T23:07:08.838461805Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hostname",
	        "HostsPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hosts",
	        "LogPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56-json.log",
	        "Name": "/old-k8s-version-770000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-770000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-770000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-770000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-770000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-770000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07494e311db80b6eb897208f0309b8eb9434435b8000ecbc8c45045c67b478ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57352"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/07494e311db8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-770000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6129a8b881ba",
	                        "old-k8s-version-770000"
	                    ],
	                    "NetworkID": "e0b81b03df244d0caf05aedc1b790fca29cd02fdbba810fc90a219bab32afcb3",
	                    "EndpointID": "c032de70445ab8aa7fa6e42f3ed33666738d73429096f11ff6d7816e52abc659",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
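The JSON above is the raw `docker container inspect` output for the profile container; the port checks in these helpers only need the NetworkSettings.Ports map. A minimal Go sketch of pulling the forwarded SSH port out of that JSON (the struct here is a hand-rolled subset for illustration, not minikube's own type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// inspectEntry models only the fields of `docker container inspect`
	// output that the port check needs; docker returns a JSON array.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-770000").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
			fmt.Fprintln(os.Stderr, "no inspect data:", err)
			os.Exit(1)
		}
		// Print the host side of the forwarded SSH port (22/tcp).
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
		}
	}

Against the container inspected above, this would print 127.0.0.1:57348.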
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (386.656648ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
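`minikube status` prints the host state on stdout while encoding component health in its exit code, which is why the helper notes that exit status 2 "may be ok": the kic container is Running even though some higher-level component is not. A sketch of reading both channels, assuming the binary path and profile from this run (the exact exit-code bit meanings vary across minikube versions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-770000")
		out, err := cmd.Output() // stdout still carries "Running" on a non-zero exit
		state := strings.TrimSpace(string(out))
		code := 0
		if ee, ok := err.(*exec.ExitError); ok {
			code = ee.ExitCode() // non-zero encodes which components are degraded
		} else if err != nil {
			panic(err) // the binary could not be run at all
		}
		fmt.Printf("host=%q exit=%d\n", state, code)
	}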
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25: (1.388558373s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| unpause | -p no-preload-042000                                   | no-preload-042000            | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-042000                                   | no-preload-042000            | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	| delete  | -p no-preload-042000                                   | no-preload-042000            | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:09 PDT |
	| start   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:09 PDT | 17 Jul 23 16:10 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-306000            | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-306000                 | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:10 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:16 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-306000 sudo                             | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p                                                     | disable-driver-mounts-278000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | disable-driver-mounts-278000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:17 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-651000  | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:17 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:18 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-651000       | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:18 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:23 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
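The Audit table above is rendered from minikube's on-disk audit log. As a rough illustration, the rows could be re-read with something like the sketch below; the $MINIKUBE_HOME/logs/audit.json location and the one-JSON-object-per-line layout are assumptions that may not hold for every minikube version:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"path/filepath"
	)

	// auditRow mirrors the columns of the table above; the on-disk field
	// names are an assumption and may differ between minikube versions.
	type auditRow struct {
		Command   string `json:"command"`
		Args      string `json:"args"`
		Profile   string `json:"profile"`
		User      string `json:"user"`
		Version   string `json:"version"`
		StartTime string `json:"startTime"`
		EndTime   string `json:"endTime"`
	}

	func main() {
		home, _ := os.UserHomeDir()
		f, err := os.Open(filepath.Join(home, ".minikube", "logs", "audit.json"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var r auditRow
			if json.Unmarshal(sc.Bytes(), &r) == nil && r.Command != "" {
				fmt.Printf("%-8s %-30s %s\n", r.Command, r.Profile, r.StartTime)
			}
		}
	}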
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 16:24:18
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 16:24:18.186809   95328 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:24:18.187114   95328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:24:18.187119   95328 out.go:309] Setting ErrFile to fd 2...
	I0717 16:24:18.187123   95328 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:24:18.187403   95328 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:24:18.189134   95328 out.go:303] Setting JSON to false
	I0717 16:24:18.208600   95328 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26626,"bootTime":1689609632,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:24:18.208765   95328 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:24:18.230604   95328 out.go:177] * [newest-cni-958000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:24:18.294397   95328 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:24:18.273492   95328 notify.go:220] Checking for updates...
	I0717 16:24:18.336239   95328 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:24:18.380426   95328 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:24:18.422256   95328 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:24:18.464269   95328 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:24:18.485491   95328 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:24:18.507256   95328 config.go:182] Loaded profile config "old-k8s-version-770000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0717 16:24:18.507408   95328 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:24:18.564796   95328 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:24:18.564919   95328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:24:18.669320   95328 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:24:18.656864067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
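The info.go line above shows minikube probing the daemon with `docker system info --format "{{json .}}"` and decoding the JSON before committing to the docker driver. A trimmed-down sketch of the same probe (the struct keeps only a few of the many fields in the real payload):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo keeps only a handful of `docker system info` fields;
	// encoding/json silently ignores everything else in the payload.
	type dockerInfo struct {
		NCPU            int
		MemTotal        int64
		ServerVersion   string
		OperatingSystem string
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s: %d CPUs, %d MiB\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal/1024/1024)
	}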
	I0717 16:24:18.691151   95328 out.go:177] * Using the docker driver based on user configuration
	I0717 16:24:18.732901   95328 start.go:298] selected driver: docker
	I0717 16:24:18.732931   95328 start.go:880] validating driver "docker" against <nil>
	I0717 16:24:18.732949   95328 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:24:18.736953   95328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:24:18.849983   95328 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:24:18.835401121 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:24:18.850146   95328 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0717 16:24:18.850179   95328 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0717 16:24:18.850361   95328 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 16:24:18.892528   95328 out.go:177] * Using Docker Desktop driver with root privileges
	I0717 16:24:18.913579   95328 cni.go:84] Creating CNI manager for ""
	I0717 16:24:18.913615   95328 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:24:18.913628   95328 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0717 16:24:18.913654   95328 start_flags.go:319] config:
	{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:24:18.956719   95328 out.go:177] * Starting control plane node newest-cni-958000 in cluster newest-cni-958000
	I0717 16:24:18.978707   95328 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:24:18.999483   95328 out.go:177] * Pulling base image ...
	I0717 16:24:19.041848   95328 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:24:19.041891   95328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:24:19.041974   95328 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 16:24:19.041999   95328 cache.go:57] Caching tarball of preloaded images
	I0717 16:24:19.042209   95328 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:24:19.042233   95328 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 16:24:19.043224   95328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:24:19.043378   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json: {Name:mke7beaf94a6d7a240d455c7cc0409c37a0d188c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:19.096443   95328 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:24:19.096495   95328 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:24:19.096513   95328 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:24:19.096735   95328 start.go:365] acquiring machines lock for newest-cni-958000: {Name:mke5d528d9e88e8bdafae9a78be680113515a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:24:19.096986   95328 start.go:369] acquired machines lock for "newest-cni-958000" in 237.82µs
	I0717 16:24:19.097032   95328 start.go:93] Provisioning new machine with config: &{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 16:24:19.097145   95328 start.go:125] createHost starting for "" (driver="docker")
	I0717 16:24:19.118714   95328 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0717 16:24:19.119111   95328 start.go:159] libmachine.API.Create for "newest-cni-958000" (driver="docker")
	I0717 16:24:19.119158   95328 client.go:168] LocalClient.Create starting
	I0717 16:24:19.120281   95328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem
	I0717 16:24:19.120392   95328 main.go:141] libmachine: Decoding PEM data...
	I0717 16:24:19.120438   95328 main.go:141] libmachine: Parsing certificate...
	I0717 16:24:19.120565   95328 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem
	I0717 16:24:19.120645   95328 main.go:141] libmachine: Decoding PEM data...
	I0717 16:24:19.120685   95328 main.go:141] libmachine: Parsing certificate...
	I0717 16:24:19.140184   95328 cli_runner.go:164] Run: docker network inspect newest-cni-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0717 16:24:19.194696   95328 cli_runner.go:211] docker network inspect newest-cni-958000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0717 16:24:19.194815   95328 network_create.go:281] running [docker network inspect newest-cni-958000] to gather additional debugging logs...
	I0717 16:24:19.194834   95328 cli_runner.go:164] Run: docker network inspect newest-cni-958000
	W0717 16:24:19.248240   95328 cli_runner.go:211] docker network inspect newest-cni-958000 returned with exit code 1
	I0717 16:24:19.248264   95328 network_create.go:284] error running [docker network inspect newest-cni-958000]: docker network inspect newest-cni-958000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-958000 not found
	I0717 16:24:19.248285   95328 network_create.go:286] output of [docker network inspect newest-cni-958000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-958000 not found
	
	** /stderr **
	I0717 16:24:19.248361   95328 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0717 16:24:19.302040   95328 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 16:24:19.302398   95328 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010eaf80}
	I0717 16:24:19.302416   95328 network_create.go:123] attempt to create docker network newest-cni-958000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 65535 ...
	I0717 16:24:19.302493   95328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-958000 newest-cni-958000
	W0717 16:24:19.355392   95328 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-958000 newest-cni-958000 returned with exit code 1
	W0717 16:24:19.355436   95328 network_create.go:148] failed to create docker network newest-cni-958000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 65535: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-958000 newest-cni-958000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0717 16:24:19.355458   95328 network_create.go:115] failed to create docker network newest-cni-958000 192.168.58.0/24, will retry: subnet is taken
	I0717 16:24:19.356792   95328 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0717 16:24:19.357095   95328 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010ebdd0}
	I0717 16:24:19.357106   95328 network_create.go:123] attempt to create docker network newest-cni-958000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 65535 ...
	I0717 16:24:19.357179   95328 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-958000 newest-cni-958000
	I0717 16:24:19.444497   95328 network_create.go:107] docker network newest-cni-958000 192.168.67.0/24 created
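The sequence above (192.168.58.0/24 rejected with "Pool overlaps", 192.168.67.0/24 accepted) is minikube's subnet probing: it walks candidate private /24 ranges and retries `docker network create` until the daemon stops reporting an overlap. A simplified sketch of that loop; the candidate list and error matching are illustrative, not minikube's exact logic:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Candidate gateways mirror the 192.168.x.0/24 progression in the log.
		for _, third := range []int{49, 58, 67, 76, 85} {
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			gateway := fmt.Sprintf("192.168.%d.1", third)
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
				"newest-cni-958000").CombinedOutput()
			if err == nil {
				fmt.Printf("created newest-cni-958000 on %s\n", subnet)
				return
			}
			// Overlap means another network owns this range: try the next candidate.
			if strings.Contains(string(out), "Pool overlaps") {
				continue
			}
			panic(string(out)) // any other daemon error is fatal
		}
		panic("no free subnet found")
	}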
	I0717 16:24:19.444530   95328 kic.go:117] calculated static IP "192.168.67.2" for the "newest-cni-958000" container
	I0717 16:24:19.444645   95328 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0717 16:24:19.499183   95328 cli_runner.go:164] Run: docker volume create newest-cni-958000 --label name.minikube.sigs.k8s.io=newest-cni-958000 --label created_by.minikube.sigs.k8s.io=true
	I0717 16:24:19.552742   95328 oci.go:103] Successfully created a docker volume newest-cni-958000
	I0717 16:24:19.552887   95328 cli_runner.go:164] Run: docker run --rm --name newest-cni-958000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-958000 --entrypoint /usr/bin/test -v newest-cni-958000:/var gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -d /var/lib
	I0717 16:24:20.043780   95328 oci.go:107] Successfully prepared a docker volume newest-cni-958000
	I0717 16:24:20.043810   95328 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:24:20.043826   95328 kic.go:190] Starting extracting preloaded images to volume ...
	I0717 16:24:20.043943   95328 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-958000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir
	I0717 16:24:23.042645   95328 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-958000:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 -I lz4 -xf /preloaded.tar -C /extractDir: (2.99854572s)
	I0717 16:24:23.042674   95328 kic.go:199] duration metric: took 2.998774 seconds to extract preloaded images to volume
	I0717 16:24:23.042790   95328 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0717 16:24:23.143292   95328 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-958000 --name newest-cni-958000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-958000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-958000 --network newest-cni-958000 --ip 192.168.67.2 --volume newest-cni-958000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631
	I0717 16:24:23.436003   95328 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Running}}
	I0717 16:24:23.490857   95328 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:24:23.548066   95328 cli_runner.go:164] Run: docker exec newest-cni-958000 stat /var/lib/dpkg/alternatives/iptables
	I0717 16:24:23.667828   95328 oci.go:144] the created container "newest-cni-958000" has a running status.
	I0717 16:24:23.667874   95328 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa...
	I0717 16:24:23.727940   95328 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0717 16:24:23.794621   95328 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:24:23.851156   95328 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0717 16:24:23.851210   95328 kic_runner.go:114] Args: [docker exec --privileged newest-cni-958000 chown docker:docker /home/docker/.ssh/authorized_keys]
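Steps 16:24:23.66 through 16:24:23.85 above create a host-side SSH keypair and install the public half as the container user's authorized_keys via `docker exec`. A rough stand-alone equivalent, assuming golang.org/x/crypto/ssh is available and using illustrative paths and a 2048-bit key (minikube's real helper lives in kic_runner.go):

	package main

	import (
		"bytes"
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"
		"os/exec"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private half stays on the host (minikube keeps it under .minikube/machines/<name>/id_rsa).
		priv := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
			panic(err)
		}
		// Public half is piped into the container and chowned, as in the log.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		cmd := exec.Command("docker", "exec", "-i", "--privileged", "newest-cni-958000",
			"sh", "-c", "mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys && chown -R docker:docker /home/docker/.ssh")
		cmd.Stdin = bytes.NewReader(ssh.MarshalAuthorizedKey(pub))
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(string(out))
		}
	}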
	I0717 16:24:23.950948   95328 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:24:24.006871   95328 machine.go:88] provisioning docker machine ...
	I0717 16:24:24.006918   95328 ubuntu.go:169] provisioning hostname "newest-cni-958000"
	I0717 16:24:24.007031   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:24.063933   95328 main.go:141] libmachine: Using SSH client type: native
	I0717 16:24:24.064360   95328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58310 <nil> <nil>}
	I0717 16:24:24.064381   95328 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-958000 && echo "newest-cni-958000" | sudo tee /etc/hostname
	I0717 16:24:24.208122   95328 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-958000
	
	I0717 16:24:24.208227   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:24.261641   95328 main.go:141] libmachine: Using SSH client type: native
	I0717 16:24:24.261996   95328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58310 <nil> <nil>}
	I0717 16:24:24.262024   95328 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-958000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-958000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-958000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:24:24.393650   95328 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 16:24:24.393673   95328 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:24:24.393693   95328 ubuntu.go:177] setting up certificates
	I0717 16:24:24.393699   95328 provision.go:83] configureAuth start
	I0717 16:24:24.393784   95328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:24:24.446370   95328 provision.go:138] copyHostCerts
	I0717 16:24:24.446501   95328 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:24:24.446520   95328 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:24:24.446638   95328 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:24:24.446848   95328 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:24:24.446854   95328 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:24:24.446931   95328 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:24:24.447102   95328 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:24:24.447108   95328 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:24:24.447170   95328 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:24:24.447309   95328 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.newest-cni-958000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-958000]
	I0717 16:24:24.562631   95328 provision.go:172] copyRemoteCerts
	I0717 16:24:24.562703   95328 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:24:24.562760   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:24.616299   95328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:24:24.709547   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0717 16:24:24.731857   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:24:24.753921   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 16:24:24.776978   95328 provision.go:86] duration metric: configureAuth took 383.2571ms
	I0717 16:24:24.776992   95328 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:24:24.777228   95328 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:24:24.777322   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:24.829787   95328 main.go:141] libmachine: Using SSH client type: native
	I0717 16:24:24.830319   95328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58310 <nil> <nil>}
	I0717 16:24:24.830334   95328 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:24:24.959323   95328 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:24:24.959340   95328 ubuntu.go:71] root file system type: overlay
	I0717 16:24:24.959485   95328 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:24:24.959588   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:25.013461   95328 main.go:141] libmachine: Using SSH client type: native
	I0717 16:24:25.013811   95328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58310 <nil> <nil>}
	I0717 16:24:25.013868   95328 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:24:25.152257   95328 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:24:25.152448   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:25.205288   95328 main.go:141] libmachine: Using SSH client type: native
	I0717 16:24:25.205654   95328 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58310 <nil> <nil>}
	I0717 16:24:25.205668   95328 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:24:25.858231   95328 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-07-07 14:50:55.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-07-17 23:24:25.150338387 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0717 16:24:25.858257   95328 machine.go:91] provisioned docker machine in 1.851320263s
	I0717 16:24:25.858292   95328 client.go:171] LocalClient.Create took 6.738959358s
	I0717 16:24:25.858319   95328 start.go:167] duration metric: libmachine.API.Create for "newest-cni-958000" took 6.739046228s
	I0717 16:24:25.858330   95328 start.go:300] post-start starting for "newest-cni-958000" (driver="docker")
	I0717 16:24:25.858347   95328 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:24:25.858413   95328 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:24:25.858474   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:25.913515   95328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:24:26.007189   95328 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:24:26.012219   95328 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:24:26.012248   95328 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:24:26.012256   95328 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:24:26.012262   95328 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:24:26.012272   95328 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:24:26.012368   95328 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:24:26.012537   95328 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:24:26.012746   95328 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:24:26.021540   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:24:26.043488   95328 start.go:303] post-start completed in 185.129009ms
	I0717 16:24:26.052682   95328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:24:26.104023   95328 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:24:26.104487   95328 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:24:26.104550   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:26.156327   95328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:24:26.246974   95328 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:24:26.252219   95328 start.go:128] duration metric: createHost completed in 7.154875065s
	I0717 16:24:26.252237   95328 start.go:83] releasing machines lock for "newest-cni-958000", held for 7.155068659s
	I0717 16:24:26.252319   95328 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:24:26.305244   95328 ssh_runner.go:195] Run: cat /version.json
	I0717 16:24:26.305263   95328 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:24:26.305350   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:26.305350   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:26.366991   95328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:24:26.366990   95328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:24:26.574168   95328 ssh_runner.go:195] Run: systemctl --version
	I0717 16:24:26.579828   95328 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:24:26.585248   95328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:24:26.609060   95328 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:24:26.609176   95328 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 16:24:26.633187   95328 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
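The two `find` runs above prepare the CNI directory: the first patches the preinstalled loopback config in place (inserting a "name": "loopback" field if missing and pinning cniVersion to 1.0.0), and the second renames any bridge/podman configs to *.mk_disabled so they cannot shadow the network plugin minikube installs later. Assuming the stock loopback file, the patched result would look roughly like this (file name and exact contents illustrative, not captured from the node):

    # Illustrative: shape of the loopback config after the sed patching above.
    cat <<'EOF' | sudo tee /etc/cni/net.d/loopback.conf
    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }
    EOF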
	I0717 16:24:26.633200   95328 start.go:466] detecting cgroup driver to use...
	I0717 16:24:26.633214   95328 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:24:26.633331   95328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:24:26.649193   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 16:24:26.658927   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:24:26.668372   95328 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:24:26.668441   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:24:26.678771   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:24:26.688965   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:24:26.698839   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:24:26.708863   95328 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:24:26.718323   95328 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:24:26.728397   95328 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:24:26.737471   95328 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:24:26.746220   95328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:24:26.819417   95328 ssh_runner.go:195] Run: sudo systemctl restart containerd
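Taken together, the preceding `sed` commands rewrite /etc/containerd/config.toml to match the rest of the stack: pin the sandbox (pause) image to registry.k8s.io/pause:3.9, stop restricting oom_score_adj, select the cgroupfs driver (SystemdCgroup = false, matching the "cgroupfs" detection above), migrate v1 and runc-v1 runtime references to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d; the daemon-reload and restart then apply the result. Condensed into a single invocation (same edits as above, assuming the stock kicbase config.toml):

    # The same config.toml edits as above, batched; the restart applies them.
    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' \
      -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd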
	I0717 16:24:26.890172   95328 start.go:466] detecting cgroup driver to use...
	I0717 16:24:26.890192   95328 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:24:26.890259   95328 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:24:26.903523   95328 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:24:26.903598   95328 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:24:26.916387   95328 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:24:26.937805   95328 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:24:26.942888   95328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:24:26.953671   95328 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:24:26.973645   95328 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:24:27.084429   95328 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:24:27.181373   95328 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:24:27.181389   95328 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:24:27.198952   95328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:24:27.288573   95328 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:24:27.526191   95328 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:24:27.599052   95328 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 16:24:27.671482   95328 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:24:27.744672   95328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:24:27.808582   95328 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 16:24:27.836741   95328 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:24:27.903378   95328 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
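cri-dockerd (the CRI shim in front of dockerd) is socket-activated, so the sequence above unmasks and enables cri-docker.socket, reloads systemd, and restarts both the socket and the service; the "Will wait 60s" lines that follow then poll for /var/run/cri-dockerd.sock before kubeadm is allowed to run. A hypothetical shell equivalent of that wait loop:

    # Hypothetical stand-in for the 60s socket wait logged below.
    for _ in $(seq 1 60); do
      stat /var/run/cri-dockerd.sock >/dev/null 2>&1 && break
      sleep 1
    done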
	I0717 16:24:27.974883   95328 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 16:24:27.974992   95328 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 16:24:27.979792   95328 start.go:534] Will wait 60s for crictl version
	I0717 16:24:27.979861   95328 ssh_runner.go:195] Run: which crictl
	I0717 16:24:27.984292   95328 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 16:24:28.030341   95328 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 16:24:28.030430   95328 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:24:28.056459   95328 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:24:28.129826   95328 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 16:24:28.130035   95328 cli_runner.go:164] Run: docker exec -t newest-cni-958000 dig +short host.docker.internal
	I0717 16:24:28.244932   95328 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:24:28.245101   95328 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:24:28.250091   95328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
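Updating /etc/hosts follows the same check-then-write discipline as the unit file earlier: `grep` first to see whether the exact mapping is already present, and only if it is missing filter out any stale host.minikube.internal line, append the fresh one to a temp file, and `cp` the result back (copying rather than renaming matters inside containers, where /etc/hosts is a bind mount that cannot be replaced). The same idiom for an arbitrary entry, with hypothetical variable names:

    # Hypothetical generalization of the hosts update above.
    IP=192.168.65.254 NAME=host.minikube.internal
    grep -q "$IP[[:space:]]$NAME\$" /etc/hosts || {
      { grep -v "[[:space:]]$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
      sudo cp /tmp/h.$$ /etc/hosts
    }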
	I0717 16:24:28.261384   95328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:24:28.336094   95328 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 16:24:28.357955   95328 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:24:28.358140   95328 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:24:28.379477   95328 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:24:28.379495   95328 docker.go:566] Images already preloaded, skipping extraction
	I0717 16:24:28.379580   95328 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:24:28.399456   95328 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:24:28.399500   95328 cache_images.go:84] Images are preloaded, skipping loading
	I0717 16:24:28.399603   95328 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:24:28.453355   95328 cni.go:84] Creating CNI manager for ""
	I0717 16:24:28.453371   95328 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:24:28.453403   95328 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 16:24:28.453422   95328 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-958000 NodeName:newest-cni-958000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 16:24:28.453605   95328 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-958000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
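The rendered kubeadm config is one file holding four YAML documents separated by `---`: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration. That is why the single `kubeadm init --config` run below can settle API-server flags, the kubelet cgroup driver, and kube-proxy conntrack tuning all at once. A quick structural sanity check (illustrative, not part of the test):

    # Illustrative: list the document kinds in the rendered config.
    grep -E '^(kind|apiVersion):' /var/tmp/minikube/kubeadm.yaml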
	
	I0717 16:24:28.453688   95328 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-958000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 16:24:28.453752   95328 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 16:24:28.463164   95328 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:24:28.463233   95328 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:24:28.472154   95328 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 16:24:28.490134   95328 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:24:28.506922   95328 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0717 16:24:28.523697   95328 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:24:28.528274   95328 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:24:28.539970   95328 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000 for IP: 192.168.67.2
	I0717 16:24:28.539991   95328 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.540181   95328 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:24:28.540272   95328 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:24:28.540317   95328 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key
	I0717 16:24:28.540336   95328 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.crt with IP's: []
	I0717 16:24:28.670247   95328 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.crt ...
	I0717 16:24:28.670264   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.crt: {Name:mkccdf17704e8f82c625c1276c35f298a18b341e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.670603   95328 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key ...
	I0717 16:24:28.670611   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key: {Name:mk0a9b32c8f41b43c456e340dc4cc5b0f1e7cd46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.670854   95328 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e
	I0717 16:24:28.670872   95328 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0717 16:24:28.709848   95328 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt.c7fa3a9e ...
	I0717 16:24:28.709856   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt.c7fa3a9e: {Name:mkd4d0b6aeef7766cef76928d3ab5dd79f9d0a62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.710091   95328 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e ...
	I0717 16:24:28.710100   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e: {Name:mkbe4b96700987c1b943fd106bc0bc1639367cb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.710296   95328 certs.go:337] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt
	I0717 16:24:28.710468   95328 certs.go:341] copying /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key
	I0717 16:24:28.710648   95328 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key
	I0717 16:24:28.710662   95328 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt with IP's: []
	I0717 16:24:28.755038   95328 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt ...
	I0717 16:24:28.755046   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt: {Name:mkc44d9e95be2d1249ecfe2bc5221667ce4eaa19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.755258   95328 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key ...
	I0717 16:24:28.755268   95328 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key: {Name:mkdc2188a29c92c163e70ebd5370a8e4e8b1435b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:24:28.755650   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:24:28.755704   95328 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:24:28.755717   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:24:28.755753   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:24:28.755788   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:24:28.755818   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:24:28.755881   95328 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:24:28.756409   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:24:28.779587   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 16:24:28.802353   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:24:28.824822   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 16:24:28.848031   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:24:28.870829   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:24:28.893322   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:24:28.915672   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:24:28.937917   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:24:28.960798   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:24:28.984223   95328 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:24:29.006316   95328 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:24:29.023413   95328 ssh_runner.go:195] Run: openssl version
	I0717 16:24:29.029776   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:24:29.039503   95328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:24:29.044151   95328 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:24:29.044208   95328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:24:29.051434   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 16:24:29.061220   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:24:29.070758   95328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:24:29.075437   95328 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:24:29.075483   95328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:24:29.082519   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:24:29.092261   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:24:29.102210   95328 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:24:29.106482   95328 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:24:29.106530   95328 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:24:29.113592   95328 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
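The openssl/ln pairs above reproduce what c_rehash does: OpenSSL resolves CA certificates in /etc/ssl/certs by subject-name hash plus a ".0" suffix, so each PEM placed under /usr/share/ca-certificates gets a hash-named symlink (b5213941.0, 51391683.0, and 3ec20f2e.0 here). The same step for a single certificate:

    # Rehash one CA certificate, as in the commands above.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"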
	I0717 16:24:29.123358   95328 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:24:29.128246   95328 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0717 16:24:29.128290   95328 kubeadm.go:404] StartCluster: {Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDo
main:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:24:29.128399   95328 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:24:29.147974   95328 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:24:29.157266   95328 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:24:29.166349   95328 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0717 16:24:29.166440   95328 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:24:29.175896   95328 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0717 16:24:29.175922   95328 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0717 16:24:29.222964   95328 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0717 16:24:29.223209   95328 kubeadm.go:322] [preflight] Running pre-flight checks
	I0717 16:24:29.351673   95328 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0717 16:24:29.351793   95328 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0717 16:24:29.351908   95328 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0717 16:24:29.647651   95328 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0717 16:24:29.669352   95328 out.go:204]   - Generating certificates and keys ...
	I0717 16:24:29.669478   95328 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0717 16:24:29.669536   95328 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0717 16:24:29.838300   95328 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0717 16:24:30.034894   95328 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0717 16:24:30.134262   95328 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0717 16:24:30.232073   95328 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0717 16:24:30.332910   95328 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0717 16:24:30.333037   95328 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-958000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0717 16:24:30.590112   95328 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0717 16:24:30.590277   95328 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-958000] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0717 16:24:30.702718   95328 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0717 16:24:31.033965   95328 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0717 16:24:31.109226   95328 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0717 16:24:31.109283   95328 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0717 16:24:31.175066   95328 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0717 16:24:31.602108   95328 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0717 16:24:31.746851   95328 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0717 16:24:31.838079   95328 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0717 16:24:31.849870   95328 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0717 16:24:31.850420   95328 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0717 16:24:31.850457   95328 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0717 16:24:31.922733   95328 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0717 16:24:31.944162   95328 out.go:204]   - Booting up control plane ...
	I0717 16:24:31.944245   95328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0717 16:24:31.944315   95328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0717 16:24:31.944407   95328 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0717 16:24:31.944522   95328 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0717 16:24:31.944659   95328 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0717 16:24:36.931480   95328 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002232 seconds
	I0717 16:24:36.931596   95328 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0717 16:24:36.943577   95328 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0717 16:24:37.460844   95328 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0717 16:24:37.461018   95328 kubeadm.go:322] [mark-control-plane] Marking the node newest-cni-958000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0717 16:24:37.968971   95328 kubeadm.go:322] [bootstrap-token] Using token: 9dcv1y.9aq42vn22vo3n7zl
	I0717 16:24:37.992662   95328 out.go:204]   - Configuring RBAC rules ...
	I0717 16:24:37.992848   95328 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0717 16:24:38.034464   95328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0717 16:24:38.040052   95328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0717 16:24:38.042286   95328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0717 16:24:38.044829   95328 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0717 16:24:38.047905   95328 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0717 16:24:38.056292   95328 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0717 16:24:38.247031   95328 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0717 16:24:38.442666   95328 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0717 16:24:38.443100   95328 kubeadm.go:322] 
	I0717 16:24:38.443228   95328 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0717 16:24:38.443248   95328 kubeadm.go:322] 
	I0717 16:24:38.443423   95328 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0717 16:24:38.443435   95328 kubeadm.go:322] 
	I0717 16:24:38.443465   95328 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0717 16:24:38.443537   95328 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0717 16:24:38.443620   95328 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0717 16:24:38.443634   95328 kubeadm.go:322] 
	I0717 16:24:38.443721   95328 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0717 16:24:38.443741   95328 kubeadm.go:322] 
	I0717 16:24:38.443853   95328 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0717 16:24:38.443879   95328 kubeadm.go:322] 
	I0717 16:24:38.443972   95328 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0717 16:24:38.444117   95328 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0717 16:24:38.444190   95328 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0717 16:24:38.444196   95328 kubeadm.go:322] 
	I0717 16:24:38.444294   95328 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0717 16:24:38.444426   95328 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0717 16:24:38.444450   95328 kubeadm.go:322] 
	I0717 16:24:38.444543   95328 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 9dcv1y.9aq42vn22vo3n7zl \
	I0717 16:24:38.444634   95328 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c91791c43f0837f14029917e389a3c1fbc9f61837a204691a147590f42f47a3b \
	I0717 16:24:38.444657   95328 kubeadm.go:322] 	--control-plane 
	I0717 16:24:38.444664   95328 kubeadm.go:322] 
	I0717 16:24:38.444734   95328 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0717 16:24:38.444742   95328 kubeadm.go:322] 
	I0717 16:24:38.444864   95328 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 9dcv1y.9aq42vn22vo3n7zl \
	I0717 16:24:38.445003   95328 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:c91791c43f0837f14029917e389a3c1fbc9f61837a204691a147590f42f47a3b 
	I0717 16:24:38.448961   95328 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0717 16:24:38.449083   95328 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
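The init output above carries a generated bootstrap token (ttl: 24h0m0s per the config earlier) plus the CA cert hash that `kubeadm join` verifies against; because the token expires, a node joining later would regenerate the command rather than reuse the one captured in this log:

    # Regenerate a fresh, complete join command on the control-plane node.
    sudo kubeadm token create --print-join-command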
	I0717 16:24:38.449093   95328 cni.go:84] Creating CNI manager for ""
	I0717 16:24:38.449130   95328 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:24:38.487682   95328 out.go:177] * Configuring CNI (Container Networking Interface) ...
	
	* 
	* ==> Docker <==
	* Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.478847024Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.516483731Z" level=info msg="Loading containers: done."
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.524970441Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.525064793Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.550938254Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:17 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.551209856Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.289287062Z" level=info msg="Processing signal 'terminated'"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290215789Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290601746Z" level=info msg="Daemon shutdown complete"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290721710Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.347878934Z" level=info msg="Starting up"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.356497542Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.550910373Z" level=info msg="Loading containers: start."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.658745716Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.695384146Z" level=info msg="Loading containers: done."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705046377Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705110433Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731708392Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731827717Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* time="2023-07-17T23:24:41Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:24:41 up  7:23,  0 users,  load average: 1.55, 1.08, 1.18
	Linux old-k8s-version-770000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 23:24:39 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:24:40 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 880.
	Jul 17 23:24:40 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:24:40 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: I0717 23:24:40.632374   25968 server.go:410] Version: v1.16.0
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: I0717 23:24:40.632639   25968 plugins.go:100] No cloud provider specified.
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: I0717 23:24:40.632649   25968 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: I0717 23:24:40.634386   25968 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: W0717 23:24:40.635200   25968 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: W0717 23:24:40.635265   25968 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:24:40 old-k8s-version-770000 kubelet[25968]: F0717 23:24:40.635290   25968 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:24:40 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:24:40 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:24:41 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 881.
	Jul 17 23:24:41 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:24:41 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: I0717 23:24:41.375082   26085 server.go:410] Version: v1.16.0
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: I0717 23:24:41.375290   26085 plugins.go:100] No cloud provider specified.
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: I0717 23:24:41.375303   26085 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: I0717 23:24:41.377069   26085 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: W0717 23:24:41.377801   26085 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: W0717 23:24:41.377876   26085 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:24:41 old-k8s-version-770000 kubelet[26085]: F0717 23:24:41.377901   26085 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:24:41 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:24:41 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
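The kubelet crash loop above (restart counter near 880, fatal "mountpoint for cpu not found") is the proximate failure: kubelet v1.16 predates cgroup v2 support and expects per-controller v1 mountpoints such as cpu, which this 5.15 linuxkit host plausibly does not expose on a unified hierarchy. A hedged diagnostic, not something this test runs, to check which hierarchy the node uses:

    # cgroup2fs => unified v2 hierarchy (no per-controller cpu mount); tmpfs => legacy v1.
    stat -fc %T /sys/fs/cgroup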
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 16:24:41.187505   95466 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (376.685534ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-770000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.16s)
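The status checks above pass a Go template to minikube status; with the apiserver stopped the command exits 2, which the harness explicitly treats as potentially acceptable ("may be ok"). As a sketch, several fields of the status struct can be queried in one call, assuming the profile name from this report (Host, Kubelet and APIServer are the same fields used elsewhere in these logs):

	$ out/minikube-darwin-amd64 status -p old-k8s-version-770000 \
	    --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'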

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (405.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:25:27.634689   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:25:33.968327   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:25:45.849028   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:25:50.503401   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:26:48.573073   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:27:08.913015   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:27:45.084065   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.089177   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.100120   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.122251   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.164383   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.244732   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.406882   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:45.728055   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:46.368582   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:47.649067   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:27:50.211355   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:27:55.333775   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:28:01.302664   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:28:01.806769   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:28:02.951177   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:28:05.575680   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:28:25.735703   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:28:26.057373   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:29:00.944291   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:29:07.059063   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:29:24.352814   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:29:27.217197   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:29:35.063194   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:30:27.640310   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:30:28.980929   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/default-k8s-diff-port-651000/client.crt: no such file or directory
E0717 16:30:33.974757   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:30:45.853674   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:30:50.508295   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E0717 16:31:08.822610   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:57352/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (365.367647ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-770000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.905µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-770000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
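Each WARNING line in the wait loop above corresponds to one REST call, GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard, against the forwarded apiserver port; the repeated EOFs show nothing was answering on 127.0.0.1:57352 for the full 9m0s budget. The same check done by hand would look like the following sketch (context, namespace and label selector taken from the report; it can only succeed once the apiserver is reachable):

	$ kubectl --context old-k8s-version-770000 -n kubernetes-dashboard \
	    get pods -l k8s-app=kubernetes-dashboard -o wide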
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-770000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-770000:

-- stdout --
	[
	    {
	        "Id": "6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56",
	        "Created": "2023-07-17T23:01:29.298658175Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1241282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:07:11.601689472Z",
	            "FinishedAt": "2023-07-17T23:07:08.838461805Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hostname",
	        "HostsPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/hosts",
	        "LogPath": "/var/lib/docker/containers/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56/6129a8b881bae9c9b14658d603684185fa98d048ad62c6ede03346a49e6e2b56-json.log",
	        "Name": "/old-k8s-version-770000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-770000:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-770000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a038e5269af2118ee927b0485208b6f3b1d1f1a742907462c43ed3f30ca09e24/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-770000",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-770000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-770000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-770000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07494e311db80b6eb897208f0309b8eb9434435b8000ecbc8c45045c67b478ce",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57348"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57349"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57350"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57351"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "57352"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/07494e311db8",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-770000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "6129a8b881ba",
	                        "old-k8s-version-770000"
	                    ],
	                    "NetworkID": "e0b81b03df244d0caf05aedc1b790fca29cd02fdbba810fc90a219bab32afcb3",
	                    "EndpointID": "c032de70445ab8aa7fa6e42f3ed33666738d73429096f11ff6d7816e52abc659",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
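The inspect dump above confirms the container is Running and that the apiserver's 8443/tcp is published on 127.0.0.1:57352, the exact endpoint the dashboard polls were failing against. For targeted post-mortems, docker inspect accepts a Go template instead of dumping the whole document; a sketch over fields present in this output:

	# Container state only.
	$ docker inspect -f '{{.State.Status}}' old-k8s-version-770000

	# Host port bound to the apiserver's 8443/tcp.
	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-770000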
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (360.264537ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-770000 logs -n 25: (1.349090247s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p                                                     | disable-driver-mounts-278000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | disable-driver-mounts-278000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:17 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-651000  | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:17 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:18 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-651000       | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:18 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:23 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-958000             | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-958000                  | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-958000 sudo                              | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:26 PDT | 17 Jul 23 16:26 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:26 PDT | 17 Jul 23 16:26 PDT |
	| delete  | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:26 PDT | 17 Jul 23 16:26 PDT |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 16:25:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 16:25:06.009437   95547 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:25:06.009596   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009601   95547 out.go:309] Setting ErrFile to fd 2...
	I0717 16:25:06.009605   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009788   95547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:25:06.011309   95547 out.go:303] Setting JSON to false
	I0717 16:25:06.031026   95547 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26674,"bootTime":1689609632,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:25:06.031121   95547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:25:06.089845   95547 out.go:177] * [newest-cni-958000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:25:06.110752   95547 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:25:06.110751   95547 notify.go:220] Checking for updates...
	I0717 16:25:06.131899   95547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:06.152973   95547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:25:06.194927   95547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:25:06.216130   95547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:25:06.237662   95547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:25:06.259830   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:06.260607   95547 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:25:06.316213   95547 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:25:06.316332   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.421076   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.408164421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:25:06.442961   95547 out.go:177] * Using the docker driver based on existing profile
	I0717 16:25:06.485802   95547 start.go:298] selected driver: docker
	I0717 16:25:06.485854   95547 start.go:880] validating driver "docker" against &{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:06.485978   95547 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:25:06.490018   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.592476   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.580883045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
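
The `docker info: {...}` record above is the decoded output of `docker system info --format "{{json .}}"`, run two lines earlier. A minimal sketch of decoding that JSON in Go; the struct below covers only a handful of the fields Docker emits and is an illustration, not minikube's actual type:

    // Decode `docker system info --format "{{json .}}"` into a small struct.
    // Field names follow the JSON keys Docker emits; encoding/json matches
    // exported field names against keys case-insensitively.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    type dockerInfo struct {
    	ID                string
    	Containers        int
    	ContainersRunning int
    	Driver            string
    	OperatingSystem   string
    	NCPU              int
    	MemTotal          int64
    	CgroupDriver      string
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("driver=%s cgroup=%s cpus=%d\n", info.Driver, info.CgroupDriver, info.NCPU)
    }

A small struct like this is enough to pull out the fields the driver-validation step cares about (driver, cgroup driver, CPU and memory capacity); unknown JSON keys are simply ignored.
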
	I0717 16:25:06.592703   95547 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 16:25:06.592726   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:06.592738   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:06.592749   95547 start_flags.go:319] config:
	{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:06.636142   95547 out.go:177] * Starting control plane node newest-cni-958000 in cluster newest-cni-958000
	I0717 16:25:06.657526   95547 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:25:06.701246   95547 out.go:177] * Pulling base image ...
	I0717 16:25:06.722545   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:06.722538   95547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:25:06.722637   95547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 16:25:06.722662   95547 cache.go:57] Caching tarball of preloaded images
	I0717 16:25:06.722849   95547 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:25:06.722871   95547 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 16:25:06.723822   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:06.773809   95547 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:25:06.773830   95547 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:25:06.773849   95547 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:25:06.773902   95547 start.go:365] acquiring machines lock for newest-cni-958000: {Name:mke5d528d9e88e8bdafae9a78be680113515a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:25:06.773988   95547 start.go:369] acquired machines lock for "newest-cni-958000" in 65.802µs
	I0717 16:25:06.774027   95547 start.go:96] Skipping create...Using existing machine configuration
	I0717 16:25:06.774036   95547 fix.go:54] fixHost starting: 
	I0717 16:25:06.774260   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:06.827574   95547 fix.go:102] recreateIfNeeded on newest-cni-958000: state=Stopped err=<nil>
	W0717 16:25:06.827609   95547 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 16:25:06.849431   95547 out.go:177] * Restarting existing docker container for "newest-cni-958000" ...
	I0717 16:25:06.892254   95547 cli_runner.go:164] Run: docker start newest-cni-958000
	I0717 16:25:07.137692   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:07.191261   95547 kic.go:426] container "newest-cni-958000" state is running.
	I0717 16:25:07.192963   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:07.249585   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:07.249963   95547 machine.go:88] provisioning docker machine ...
	I0717 16:25:07.249988   95547 ubuntu.go:169] provisioning hostname "newest-cni-958000"
	I0717 16:25:07.250075   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:07.309695   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:07.310266   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:07.310286   95547 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-958000 && echo "newest-cni-958000" | sudo tee /etc/hostname
	I0717 16:25:07.311626   95547 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 16:25:10.457012   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-958000
	
	I0717 16:25:10.457115   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.508111   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:10.508463   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:10.508476   95547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-958000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-958000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-958000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:25:10.637014   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
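
The shell snippet run above is how the provisioner keeps /etc/hosts consistent with the machine hostname: replace an existing 127.0.1.1 entry if one is present, otherwise append one. A hedged sketch of composing that snippet for an arbitrary hostname (the helper name etcHostsCmd is hypothetical, not a minikube function):

    // Build the /etc/hosts patch command shown in the log for a given hostname.
    package main

    import "fmt"

    func etcHostsCmd(hostname string) string {
    	// Raw string keeps the \s escapes literal for grep/sed.
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname, hostname, hostname)
    }

    func main() { fmt.Println(etcHostsCmd("newest-cni-958000")) }
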
	I0717 16:25:10.637034   95547 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:25:10.637061   95547 ubuntu.go:177] setting up certificates
	I0717 16:25:10.637070   95547 provision.go:83] configureAuth start
	I0717 16:25:10.637143   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:10.688211   95547 provision.go:138] copyHostCerts
	I0717 16:25:10.688327   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:25:10.688340   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:25:10.688433   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:25:10.688654   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:25:10.688661   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:25:10.688722   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:25:10.688885   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:25:10.688890   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:25:10.688954   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:25:10.689095   95547 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.newest-cni-958000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-958000]
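
The `generating server cert` record lists the subject alternative names baked into the Docker server certificate (node IP, loopback, and hostname aliases). A minimal sketch, not minikube's implementation, of issuing such a certificate with crypto/x509 given an already-loaded CA; the package and function names are hypothetical and error handling is trimmed:

    // Issue a server certificate carrying the SAN list from the log above,
    // signed by an existing CA certificate and key.
    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-958000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-958000"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }
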
	I0717 16:25:10.742105   95547 provision.go:172] copyRemoteCerts
	I0717 16:25:10.742156   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:25:10.742207   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.794450   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:10.888372   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:25:10.909833   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 16:25:10.931296   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 16:25:10.954241   95547 provision.go:86] duration metric: configureAuth took 317.148573ms
	I0717 16:25:10.954254   95547 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:25:10.954415   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:10.954486   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.006980   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.007328   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.007338   95547 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:25:11.136352   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:25:11.136367   95547 ubuntu.go:71] root file system type: overlay
	I0717 16:25:11.136454   95547 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:25:11.136539   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.189136   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.189503   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.189554   95547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:25:11.328134   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:25:11.328248   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.381008   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.381375   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.381389   95547 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:25:11.515613   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
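
A note on the `printf %!s(MISSING)` text in the logged command above (the same artifact appears later as `%!p(MISSING)` in the CNI find command and as `"0%!"(MISSING)` in the kubelet eviction values): Go's fmt package renders a format verb that has no matching operand as %!verb(MISSING), and the already-expanded command string is evidently passed back through a printf-style logger, so its literal % sequences get flagged. The command itself ran with a real `%s`; only the log rendering is mangled. A two-line demonstration:

    // Reproduce the %!s(MISSING) artifact: a verb with no matching operand.
    package main

    import "fmt"

    func main() {
    	cmd := "sudo mkdir -p /lib/systemd/system && printf %s"
    	// Logging an already-built command through a printf-style call:
    	fmt.Println(fmt.Sprintf(cmd))
    	// Prints: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING)
    }
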
	I0717 16:25:11.515627   95547 machine.go:91] provisioned docker machine in 4.265587448s
	I0717 16:25:11.515638   95547 start.go:300] post-start starting for "newest-cni-958000" (driver="docker")
	I0717 16:25:11.515648   95547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:25:11.515732   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:25:11.515790   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.568259   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.662379   95547 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:25:11.666476   95547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:25:11.666501   95547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:25:11.666509   95547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:25:11.666514   95547 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:25:11.666522   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:25:11.666625   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:25:11.666772   95547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:25:11.666950   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:25:11.676114   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:11.699074   95547 start.go:303] post-start completed in 183.421937ms
	I0717 16:25:11.699155   95547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:25:11.699218   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.751120   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.841678   95547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:25:11.846889   95547 fix.go:56] fixHost completed within 5.07277014s
	I0717 16:25:11.846903   95547 start.go:83] releasing machines lock for "newest-cni-958000", held for 5.07282584s
	I0717 16:25:11.846978   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:11.899786   95547 ssh_runner.go:195] Run: cat /version.json
	I0717 16:25:11.899793   95547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:25:11.899866   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.899886   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.956056   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.956065   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:12.045227   95547 ssh_runner.go:195] Run: systemctl --version
	I0717 16:25:12.156357   95547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:25:12.162287   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:25:12.180338   95547 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:25:12.180425   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 16:25:12.190049   95547 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0717 16:25:12.190063   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.190078   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.190239   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.206358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 16:25:12.216606   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:25:12.226407   95547 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.226473   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:25:12.236570   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.246484   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:25:12.256358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.266621   95547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:25:12.275927   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:25:12.285739   95547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:25:12.294620   95547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:25:12.303047   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.380317   95547 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 16:25:12.453549   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.453566   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.453641   95547 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:25:12.466611   95547 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:25:12.466696   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:25:12.480057   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.498435   95547 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:25:12.503191   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:25:12.513350   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:25:12.555466   95547 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:25:12.673456   95547 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:25:12.773745   95547 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.773763   95547 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:25:12.791921   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.877882   95547 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:25:13.154981   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.219541   95547 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 16:25:13.295019   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.363147   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.433926   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 16:25:13.447016   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.523091   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 16:25:13.603241   95547 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 16:25:13.603377   95547 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 16:25:13.608402   95547 start.go:534] Will wait 60s for crictl version
	I0717 16:25:13.608470   95547 ssh_runner.go:195] Run: which crictl
	I0717 16:25:13.613017   95547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 16:25:13.659250   95547 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 16:25:13.659347   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.684487   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.753095   95547 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 16:25:13.753268   95547 cli_runner.go:164] Run: docker exec -t newest-cni-958000 dig +short host.docker.internal
	I0717 16:25:13.866872   95547 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:25:13.867000   95547 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:25:13.872298   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:25:13.883304   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:13.958196   95547 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 16:25:13.981125   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:13.981274   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.004068   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.004086   95547 docker.go:566] Images already preloaded, skipping extraction
	I0717 16:25:14.004186   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.024670   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.024690   95547 cache_images.go:84] Images are preloaded, skipping loading
	I0717 16:25:14.024801   95547 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:25:14.077002   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:14.077018   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:14.077035   95547 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 16:25:14.077063   95547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-958000 NodeName:newest-cni-958000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 16:25:14.077185   95547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-958000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 16:25:14.077264   95547 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-958000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
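
The kubeadm config and kubelet flags above are rendered from the `kubeadm options:` struct logged at kubeadm.go:176. A hedged sketch of how such a fragment can be produced with text/template; the fragment and field names here are illustrative, not minikube's real template:

    // Render a fragment of a kubeadm InitConfiguration from option values.
    package main

    import (
    	"os"
    	"text/template"
    )

    const frag = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(frag))
    	t.Execute(os.Stdout, map[string]any{
    		"AdvertiseAddress": "192.168.67.2",
    		"APIServerPort":    8443,
    	})
    }
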
	I0717 16:25:14.077334   95547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 16:25:14.086761   95547 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:25:14.086820   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:25:14.095445   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 16:25:14.112203   95547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:25:14.128880   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0717 16:25:14.146543   95547 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:25:14.151422   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:25:14.162716   95547 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000 for IP: 192.168.67.2
	I0717 16:25:14.162770   95547 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.163001   95547 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:25:14.163059   95547 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:25:14.163153   95547 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key
	I0717 16:25:14.163217   95547 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e
	I0717 16:25:14.163302   95547 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key
	I0717 16:25:14.163503   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:25:14.163540   95547 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:25:14.163552   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:25:14.163585   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:25:14.163623   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:25:14.163663   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:25:14.163739   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:14.164307   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:25:14.186450   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 16:25:14.207957   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:25:14.230314   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 16:25:14.252824   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:25:14.275629   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:25:14.299110   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:25:14.322509   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:25:14.346431   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:25:14.370734   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:25:14.393755   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:25:14.415972   95547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:25:14.432875   95547 ssh_runner.go:195] Run: openssl version
	I0717 16:25:14.439416   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:25:14.448847   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453579   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453631   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.460543   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:25:14.469676   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:25:14.479277   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483930   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483969   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.490832   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 16:25:14.500256   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:25:14.509641   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514059   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514107   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.521121   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:25:14.530596   95547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:25:14.535035   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 16:25:14.542142   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 16:25:14.548992   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 16:25:14.556247   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 16:25:14.562969   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 16:25:14.569910   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
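
Each `openssl x509 -noout -checkend 86400` run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides the existing certs are still usable before reusing them. A hedged Go equivalent of the same check (the helper name checkEnd is hypothetical):

    // Fail if a PEM-encoded certificate expires within the given window,
    // mirroring `openssl x509 -noout -checkend <seconds>`.
    package certs

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"errors"
    	"os"
    	"time"
    )

    func checkEnd(path string, within time.Duration) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return errors.New("no PEM data in " + path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return err
    	}
    	if time.Now().Add(within).After(cert.NotAfter) {
    		return errors.New("certificate expires within " + within.String())
    	}
    	return nil
    }
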
	I0717 16:25:14.577016   95547 kubeadm.go:404] StartCluster: {Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:14.577138   95547 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:14.597277   95547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:25:14.606836   95547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 16:25:14.606848   95547 kubeadm.go:636] restartCluster start
	I0717 16:25:14.606905   95547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 16:25:14.615339   95547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:14.615416   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:14.667716   95547 kubeconfig.go:135] verify returned: extract IP: "newest-cni-958000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:14.667891   95547 kubeconfig.go:146] "newest-cni-958000" context is missing from /Users/jenkins/minikube-integration/16899-76867/kubeconfig - will repair!
	I0717 16:25:14.668222   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.669865   95547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 16:25:14.679195   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:14.679301   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:14.689873   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.190616   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.190756   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.203103   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.691413   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.691529   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.703724   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.190732   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.190895   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.203130   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.690058   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.690232   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.702437   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.192048   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.192249   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.204452   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.692056   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.692243   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.704633   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.191011   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.191119   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.202854   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.690250   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.690393   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.702597   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.192051   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.192236   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.204550   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.690110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.690222   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.702215   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.192068   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.192289   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.204520   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.691333   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.691498   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.704042   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.192110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.192288   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.205001   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.692106   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.692271   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.704783   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.191610   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.191785   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.203938   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.691211   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.691358   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.703668   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.191296   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.191347   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.202194   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.692152   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.692362   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.704743   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.190257   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:24.190415   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:24.202077   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.681029   95547 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 16:25:24.681086   95547 kubeadm.go:1128] stopping kube-system containers ...
	I0717 16:25:24.681227   95547 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:24.704553   95547 docker.go:462] Stopping containers: [259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c]
	I0717 16:25:24.704637   95547 ssh_runner.go:195] Run: docker stop 259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c
	I0717 16:25:24.724991   95547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 16:25:24.736971   95547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:25:24.745788   95547 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 23:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 17 23:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 23:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 23:24 /etc/kubernetes/scheduler.conf
	
	I0717 16:25:24.745848   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 16:25:24.754637   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 16:25:24.763302   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.771915   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.772024   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.780929   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 16:25:24.789904   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.789972   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 16:25:24.799813   95547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811422   95547 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811436   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:24.862369   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.194024   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.326421   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.380893   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.472401   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:25.472527   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.038481   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.538746   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.553153   95547 api_server.go:72] duration metric: took 1.080737378s to wait for apiserver process to appear ...
	I0717 16:25:26.553167   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:26.553177   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:26.554906   95547 api_server.go:269] stopped: https://127.0.0.1:58401/healthz: Get "https://127.0.0.1:58401/healthz": EOF
	I0717 16:25:27.055584   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:28.991380   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:28.991400   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:28.991411   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.045024   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:29.045047   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:29.055087   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.063750   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.063768   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:29.555057   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.560348   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.560366   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.056858   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.063495   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.063517   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.556898   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.565620   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.565645   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:31.055046   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:31.132299   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:31.141094   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:31.141125   95547 api_server.go:131] duration metric: took 4.587880555s to wait for apiserver health ...
	I0717 16:25:31.141151   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:31.141159   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:31.162303   95547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 16:25:31.199700   95547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 16:25:31.207311   95547 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 16:25:31.207323   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 16:25:31.225572   95547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 16:25:31.874913   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:31.882378   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:31.882403   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:31.882412   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:31.882419   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:31.882438   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:31.882444   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:31.882450   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:31.882455   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:31.882461   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:31.882466   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:31.882470   95547 system_pods.go:74] duration metric: took 7.544921ms to wait for pod list to return data ...
	I0717 16:25:31.882479   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:31.938022   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:31.938036   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:31.938085   95547 node_conditions.go:105] duration metric: took 55.598559ms to run NodePressure ...
	I0717 16:25:31.938111   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:32.257733   95547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 16:25:32.267463   95547 ops.go:34] apiserver oom_adj: -16
	I0717 16:25:32.267475   95547 kubeadm.go:640] restartCluster took 17.660339955s
	I0717 16:25:32.267487   95547 kubeadm.go:406] StartCluster complete in 17.690198187s
	I0717 16:25:32.267505   95547 settings.go:142] acquiring lock: {Name:mkcd1c9566f766bc2df0b9039d6e9d173f23ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.267594   95547 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:32.268218   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.268476   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 16:25:32.268497   95547 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 16:25:32.268630   95547 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-958000"
	I0717 16:25:32.268647   95547 addons.go:69] Setting dashboard=true in profile "newest-cni-958000"
	I0717 16:25:32.268658   95547 addons.go:69] Setting default-storageclass=true in profile "newest-cni-958000"
	I0717 16:25:32.268666   95547 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-958000"
	W0717 16:25:32.268674   95547 addons.go:240] addon storage-provisioner should already be in state true
	I0717 16:25:32.268674   95547 addons.go:231] Setting addon dashboard=true in "newest-cni-958000"
	I0717 16:25:32.268679   95547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-958000"
	W0717 16:25:32.268686   95547 addons.go:240] addon dashboard should already be in state true
	I0717 16:25:32.268654   95547 addons.go:69] Setting metrics-server=true in profile "newest-cni-958000"
	I0717 16:25:32.268722   95547 addons.go:231] Setting addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:32.268731   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.268741   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	W0717 16:25:32.268734   95547 addons.go:240] addon metrics-server should already be in state true
	I0717 16:25:32.268795   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:32.268822   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.269097   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269199   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269283   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269349   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.281119   95547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-958000" context rescaled to 1 replicas
	I0717 16:25:32.281190   95547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 16:25:32.304896   95547 out.go:177] * Verifying Kubernetes components...
	I0717 16:25:32.345271   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:25:32.380257   95547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:25:32.380262   95547 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 16:25:32.380221   95547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 16:25:32.368883   95547 addons.go:231] Setting addon default-storageclass=true in "newest-cni-958000"
	I0717 16:25:32.401301   95547 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 16:25:32.401371   95547 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 16:25:32.422071   95547 addons.go:240] addon default-storageclass should already be in state true
	I0717 16:25:32.422071   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 16:25:32.422111   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.443317   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 16:25:32.443331   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 16:25:32.422116   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 16:25:32.443356   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 16:25:32.443397   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443405   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443460   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.447630   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.457995   95547 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 16:25:32.458158   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.526486   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.526574   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.528382   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.530005   95547 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.530023   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 16:25:32.530137   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.537419   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:32.537522   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:32.558771   95547 api_server.go:72] duration metric: took 277.518719ms to wait for apiserver process to appear ...
	I0717 16:25:32.558796   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:32.558819   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:32.568638   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:32.571219   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:32.571237   95547 api_server.go:131] duration metric: took 12.431888ms to wait for apiserver health ...
	I0717 16:25:32.571246   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:32.579738   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:32.579759   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:32.579770   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:32.579782   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:32.579799   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:32.579807   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:32.579818   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:32.579827   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:32.579851   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:32.579865   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:32.579875   95547 system_pods.go:74] duration metric: took 8.621395ms to wait for pod list to return data ...
	I0717 16:25:32.579884   95547 default_sa.go:34] waiting for default service account to be created ...
	I0717 16:25:32.583269   95547 default_sa.go:45] found service account: "default"
	I0717 16:25:32.583283   95547 default_sa.go:55] duration metric: took 3.39363ms for default service account to be created ...
	I0717 16:25:32.583292   95547 kubeadm.go:581] duration metric: took 302.051979ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0717 16:25:32.583303   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:32.598048   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.641217   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:32.641232   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:32.641245   95547 node_conditions.go:105] duration metric: took 57.936871ms to run NodePressure ...
	I0717 16:25:32.641255   95547 start.go:228] waiting for startup goroutines ...
	I0717 16:25:32.753584   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 16:25:32.754402   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 16:25:32.754415   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 16:25:32.756489   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.758325   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 16:25:32.758365   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 16:25:32.839099   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 16:25:32.839121   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 16:25:32.841403   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 16:25:32.841417   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 16:25:32.866710   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 16:25:32.866725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 16:25:32.868340   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.868355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 16:25:32.948686   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 16:25:32.948700   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 16:25:32.949624   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.974331   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 16:25:32.974355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 16:25:33.069706   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 16:25:33.069725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 16:25:33.169084   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 16:25:33.169104   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 16:25:33.250984   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 16:25:33.251007   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 16:25:33.338269   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.338298   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 16:25:33.367104   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.894705   95547 addons.go:467] Verifying addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:34.484789   95547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.117630675s)
	I0717 16:25:34.506638   95547 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-958000 addons enable metrics-server	
	
	
	I0717 16:25:34.526957   95547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0717 16:25:34.548482   95547 addons.go:502] enable addons completed in 2.279938211s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0717 16:25:34.548518   95547 start.go:233] waiting for cluster config update ...
	I0717 16:25:34.548529   95547 start.go:242] writing updated cluster config ...
	I0717 16:25:34.569980   95547 ssh_runner.go:195] Run: rm -f paused
	I0717 16:25:34.613358   95547 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 16:25:34.634859   95547 out.go:177] * Done! kubectl is now configured to use "newest-cni-958000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.478847024Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.516483731Z" level=info msg="Loading containers: done."
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.524970441Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.525064793Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.550938254Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:17 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	Jul 17 23:07:17 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:17.551209856Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.289287062Z" level=info msg="Processing signal 'terminated'"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290215789Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290601746Z" level=info msg="Daemon shutdown complete"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[700]: time="2023-07-17T23:07:25.290721710Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: docker.service: Deactivated successfully.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Starting Docker Application Container Engine...
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.347878934Z" level=info msg="Starting up"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.356497542Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.550910373Z" level=info msg="Loading containers: start."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.658745716Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.695384146Z" level=info msg="Loading containers: done."
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705046377Z" level=info msg="Docker daemon" commit=4ffc614 graphdriver=overlay2 version=24.0.4
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.705110433Z" level=info msg="Daemon has completed initialization"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731708392Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 17 23:07:25 old-k8s-version-770000 dockerd[922]: time="2023-07-17T23:07:25.731827717Z" level=info msg="API listen on [::]:2376"
	Jul 17 23:07:25 old-k8s-version-770000 systemd[1]: Started Docker Application Container Engine.
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2023-07-17T23:31:26Z" level=fatal msg="unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  23:31:27 up  7:30,  0 users,  load average: 0.40, 0.55, 0.91
	Linux old-k8s-version-770000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kubelet <==
	* Jul 17 23:31:25 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:31:26 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1421.
	Jul 17 23:31:26 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:31:26 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: I0717 23:31:26.379737   32902 server.go:410] Version: v1.16.0
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: I0717 23:31:26.380037   32902 plugins.go:100] No cloud provider specified.
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: I0717 23:31:26.380050   32902 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: I0717 23:31:26.381757   32902 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: W0717 23:31:26.382686   32902 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: W0717 23:31:26.382772   32902 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:31:26 old-k8s-version-770000 kubelet[32902]: F0717 23:31:26.382798   32902 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:31:26 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:31:26 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Jul 17 23:31:27 old-k8s-version-770000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1422.
	Jul 17 23:31:27 old-k8s-version-770000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Jul 17 23:31:27 old-k8s-version-770000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: I0717 23:31:27.151002   33005 server.go:410] Version: v1.16.0
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: I0717 23:31:27.151241   33005 plugins.go:100] No cloud provider specified.
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: I0717 23:31:27.151252   33005 server.go:773] Client rotation is on, will bootstrap in background
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: I0717 23:31:27.152839   33005 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: W0717 23:31:27.153564   33005 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: W0717 23:31:27.153637   33005 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Jul 17 23:31:27 old-k8s-version-770000 kubelet[33005]: F0717 23:31:27.153660   33005 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Jul 17 23:31:27 old-k8s-version-770000 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Jul 17 23:31:27 old-k8s-version-770000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E0717 16:31:27.100120   95958 logs.go:195] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 2 (366.226717ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-770000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (405.90s)
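Two failure signatures stand out in the captured logs above: the container-status probe dies because /var/run/dockershim.sock is missing (the v1.16.0-era CRI tooling still expects the dockershim socket), and kubelet v1.16.0 crash-loops with "failed to run Kubelet: mountpoint for cpu not found", i.e. the cpu cgroup mountpoint it looks for is absent — commonly seen when the host only exposes cgroup v2. A minimal triage sketch against the still-running node container, assuming the profile container keeps the profile name (old-k8s-version-770000, as in this report):

	# Is the dockershim socket present? (it is not, per the container-status error above)
	docker exec old-k8s-version-770000 ls -l /var/run/dockershim.sock
	# Which cgroup hierarchies are mounted? kubelet v1.16.0 needs the v1 cpu hierarchy
	docker exec old-k8s-version-770000 grep cgroup /proc/mounts
	# Confirm the kubelet restart loop seen in the journal excerpt
	docker exec old-k8s-version-770000 systemctl status kubelet --no-pager
	# Capture full logs for a bug report, as the failure banner suggests
	out/minikube-darwin-amd64 logs -p old-k8s-version-770000 --file=logs.txt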

TestStartStop/group/newest-cni/serial/Pause (45.8s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-958000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-darwin-amd64 pause -p newest-cni-958000 --alsologtostderr -v=1: (1.214624249s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-958000 -n newest-cni-958000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-958000 -n newest-cni-958000: exit status 2 (16.014332815s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-958000 -n newest-cni-958000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-958000 -n newest-cni-958000: exit status 2 (15.989137458s)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-958000 --alsologtostderr -v=1
E0717 16:26:08.817017   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-958000 -n newest-cni-958000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-958000 -n newest-cni-958000
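
For context, this step expects `minikube pause` to leave the apiserver in a state that `minikube status` reports as "Paused"; the assertion above instead saw "Stopped", with each status call taking roughly 16 seconds. A simplified paraphrase of the check (names and shape assumed; the real code is at start_stop_delete_test.go:311):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // assertAPIServerPaused runs `minikube status` with a Go template and
    // requires the APIServer field to read "Paused" after a pause. status
    // exits non-zero whenever the component is not Running, so stdout is
    // read even when err is set.
    func assertAPIServerPaused(minikube, profile string) error {
        out, _ := exec.Command(minikube, "status",
            "--format={{.APIServer}}", "-p", profile, "-n", profile).Output()
        if got := strings.TrimSpace(string(out)); got != "Paused" {
            return fmt.Errorf("post-pause apiserver status = %q; want = %q", got, "Paused")
        }
        return nil
    }

    func main() {
        if err := assertAPIServerPaused("out/minikube-darwin-amd64", "newest-cni-958000"); err != nil {
            fmt.Println(err) // this run failed here with "Stopped"
        }
    }
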
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-958000
helpers_test.go:235: (dbg) docker inspect newest-cni-958000:

-- stdout --
	[
	    {
	        "Id": "2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f",
	        "Created": "2023-07-17T23:24:23.225622771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1325833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:25:07.13089368Z",
	            "FinishedAt": "2023-07-17T23:25:05.304121825Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/hosts",
	        "LogPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f-json.log",
	        "Name": "/newest-cni-958000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-958000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-958000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-958000",
	                "Source": "/var/lib/docker/volumes/newest-cni-958000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-958000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-958000",
	                "name.minikube.sigs.k8s.io": "newest-cni-958000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de349050690f96ccf7829a7b7b7205acc105069488f243c425892eaad4dea234",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58403"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de349050690f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-958000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2f07d08fcac6",
	                        "newest-cni-958000"
	                    ],
	                    "NetworkID": "762eaeabef634f98c192987f9503cdd053428e8e0cf233912e0851abd54ef938",
	                    "EndpointID": "9b730e06e664033e3b7fd17e273fca24db1c48d1c104c08427de98c7d2577622",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
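
One detail worth pulling out of the inspect dump: the State block reports "Running": true and "Paused": false, so at post-mortem time the container itself was genuinely unpaused; the "Stopped" status above refers to the apiserver inside it, not to the container. A small standalone sketch (profile name taken from this run) that reads the same field:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // isPaused asks Docker for the container's paused flag using the same
    // Go-template mechanism behind the full `docker inspect` dump above.
    func isPaused(name string) (bool, error) {
        out, err := exec.Command("docker", "inspect",
            "-f", "{{.State.Paused}}", name).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "true", nil
    }

    func main() {
        paused, err := isPaused("newest-cni-958000")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("paused:", paused) // the dump above reports false
    }
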
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-958000 -n newest-cni-958000
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-958000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-958000 logs -n 25: (4.240520682s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:16 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-306000 sudo                             | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p                                                     | disable-driver-mounts-278000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | disable-driver-mounts-278000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:17 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-651000  | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:17 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:18 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-651000       | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:18 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:23 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-958000             | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-958000                  | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-958000 sudo                              | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:26 PDT | 17 Jul 23 16:26 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 16:25:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 16:25:06.009437   95547 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:25:06.009596   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009601   95547 out.go:309] Setting ErrFile to fd 2...
	I0717 16:25:06.009605   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009788   95547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:25:06.011309   95547 out.go:303] Setting JSON to false
	I0717 16:25:06.031026   95547 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26674,"bootTime":1689609632,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:25:06.031121   95547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:25:06.089845   95547 out.go:177] * [newest-cni-958000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:25:06.110752   95547 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:25:06.110751   95547 notify.go:220] Checking for updates...
	I0717 16:25:06.131899   95547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:06.152973   95547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:25:06.194927   95547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:25:06.216130   95547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:25:06.237662   95547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:25:06.259830   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:06.260607   95547 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:25:06.316213   95547 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:25:06.316332   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.421076   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.408164421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
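
The info.go line above is the decoded result of the `docker system info --format "{{json .}}"` command that cli_runner just executed. A trimmed sketch of that round-trip (the struct below is an assumption covering only a few of the fields; the JSON keys are Docker's own):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // A few of the fields visible in the log above; the struct here is a
    // deliberately trimmed assumption, not minikube's full info type.
    type dockerInfo struct {
        NCPU          int    `json:"NCPU"`
        MemTotal      int64  `json:"MemTotal"`
        ServerVersion string `json:"ServerVersion"`
        CgroupDriver  string `json:"CgroupDriver"`
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            fmt.Println("docker info failed:", err)
            return
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            fmt.Println("decode failed:", err)
            return
        }
        // The run above reported NCPU:6 MemTotal:6231715840 ServerVersion:24.0.2.
        fmt.Printf("cpus=%d mem=%d version=%s cgroup=%s\n",
            info.NCPU, info.MemTotal, info.ServerVersion, info.CgroupDriver)
    }
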
	I0717 16:25:06.442961   95547 out.go:177] * Using the docker driver based on existing profile
	I0717 16:25:06.485802   95547 start.go:298] selected driver: docker
	I0717 16:25:06.485854   95547 start.go:880] validating driver "docker" against &{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:06.485978   95547 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:25:06.490018   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.592476   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.580883045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:25:06.592703   95547 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 16:25:06.592726   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:06.592738   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:06.592749   95547 start_flags.go:319] config:
	{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
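
The cni.go lines above record the auto-selection: with the "docker" driver, the "docker" runtime, and no explicit CNI in the config, minikube recommends kindnet. A deliberately simplified paraphrase of that decision (shape assumed; the real logic in cni.go handles more cases):

    package main

    import "fmt"

    // chooseCNI paraphrases the recommendation logged by cni.go:149. It is
    // a sketch, not minikube's implementation: an explicit choice wins, the
    // docker-on-docker pair gets kindnet, everything else falls back here
    // to bridge.
    func chooseCNI(driver, runtime, requested string) string {
        if requested != "" {
            return requested
        }
        if driver == "docker" && runtime == "docker" {
            return "kindnet"
        }
        return "bridge"
    }

    func main() {
        fmt.Println(chooseCNI("docker", "docker", "")) // kindnet, as logged above
    }
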
	I0717 16:25:06.636142   95547 out.go:177] * Starting control plane node newest-cni-958000 in cluster newest-cni-958000
	I0717 16:25:06.657526   95547 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:25:06.701246   95547 out.go:177] * Pulling base image ...
	I0717 16:25:06.722545   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:06.722538   95547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:25:06.722637   95547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 16:25:06.722662   95547 cache.go:57] Caching tarball of preloaded images
	I0717 16:25:06.722849   95547 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:25:06.722871   95547 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 16:25:06.723822   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:06.773809   95547 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:25:06.773830   95547 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:25:06.773849   95547 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:25:06.773902   95547 start.go:365] acquiring machines lock for newest-cni-958000: {Name:mke5d528d9e88e8bdafae9a78be680113515a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:25:06.773988   95547 start.go:369] acquired machines lock for "newest-cni-958000" in 65.802µs
	I0717 16:25:06.774027   95547 start.go:96] Skipping create...Using existing machine configuration
	I0717 16:25:06.774036   95547 fix.go:54] fixHost starting: 
	I0717 16:25:06.774260   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:06.827574   95547 fix.go:102] recreateIfNeeded on newest-cni-958000: state=Stopped err=<nil>
	W0717 16:25:06.827609   95547 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 16:25:06.849431   95547 out.go:177] * Restarting existing docker container for "newest-cni-958000" ...
	I0717 16:25:06.892254   95547 cli_runner.go:164] Run: docker start newest-cni-958000
	I0717 16:25:07.137692   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:07.191261   95547 kic.go:426] container "newest-cni-958000" state is running.
	I0717 16:25:07.192963   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:07.249585   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:07.249963   95547 machine.go:88] provisioning docker machine ...
	I0717 16:25:07.249988   95547 ubuntu.go:169] provisioning hostname "newest-cni-958000"
	I0717 16:25:07.250075   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:07.309695   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:07.310266   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:07.310286   95547 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-958000 && echo "newest-cni-958000" | sudo tee /etc/hostname
	I0717 16:25:07.311626   95547 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 16:25:10.457012   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-958000
	
	I0717 16:25:10.457115   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.508111   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:10.508463   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:10.508476   95547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-958000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-958000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-958000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:25:10.637014   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0717 16:25:10.637034   95547 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:25:10.637061   95547 ubuntu.go:177] setting up certificates
	I0717 16:25:10.637070   95547 provision.go:83] configureAuth start
	I0717 16:25:10.637143   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:10.688211   95547 provision.go:138] copyHostCerts
	I0717 16:25:10.688327   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:25:10.688340   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:25:10.688433   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:25:10.688654   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:25:10.688661   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:25:10.688722   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:25:10.688885   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:25:10.688890   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:25:10.688954   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:25:10.689095   95547 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.newest-cni-958000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-958000]
	I0717 16:25:10.742105   95547 provision.go:172] copyRemoteCerts
	I0717 16:25:10.742156   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:25:10.742207   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.794450   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:10.888372   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:25:10.909833   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 16:25:10.931296   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 16:25:10.954241   95547 provision.go:86] duration metric: configureAuth took 317.148573ms
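
The provision.go:112 step above generates a server certificate whose SANs cover the container IP, loopback, and the hostname aliases listed in the log. A self-contained sketch of SAN handling with crypto/x509 (self-signed for brevity, whereas the real flow signs with the minikube CA; the 26280h lifetime matches CertExpiration in the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-958000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            // The SAN list logged above: container IP, loopback, aliases.
            IPAddresses: []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "newest-cni-958000"},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed here for brevity; minikube signs with its own CA pair.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
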
	I0717 16:25:10.954254   95547 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:25:10.954415   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:10.954486   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.006980   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.007328   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.007338   95547 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:25:11.136352   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:25:11.136367   95547 ubuntu.go:71] root file system type: overlay
	I0717 16:25:11.136454   95547 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:25:11.136539   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.189136   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.189503   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.189554   95547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:25:11.328134   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:25:11.328248   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.381008   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.381375   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.381389   95547 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:25:11.515613   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
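
The pair of SSH commands above is an idempotent provisioning step: the candidate unit is written to docker.service.new, and the `diff -u ... || { mv ...; systemctl ... }` gate swaps it in and restarts Docker only when the content actually changed (here the diff output was empty, so nothing was restarted). A minimal Go sketch of the same write/diff/swap pattern; the helper name and the hard-coded service are illustrative, not minikube's actual code:

	// Sketch: install a config file only if its content changed,
	// reloading and restarting the service only on change.
	package provision

	import (
		"bytes"
		"os"
		"os/exec"
	)

	func applyDockerUnit(path string, want []byte) error {
		have, err := os.ReadFile(path)
		if err == nil && bytes.Equal(have, want) {
			return nil // unchanged: skip daemon-reload and restart
		}
		if err := os.WriteFile(path+".new", want, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			return err
		}
		return exec.Command("systemctl", "restart", "docker").Run()
	}
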
	I0717 16:25:11.515627   95547 machine.go:91] provisioned docker machine in 4.265587448s
	I0717 16:25:11.515638   95547 start.go:300] post-start starting for "newest-cni-958000" (driver="docker")
	I0717 16:25:11.515648   95547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:25:11.515732   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:25:11.515790   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.568259   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.662379   95547 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:25:11.666476   95547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:25:11.666501   95547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:25:11.666509   95547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:25:11.666514   95547 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:25:11.666522   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:25:11.666625   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:25:11.666772   95547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:25:11.666950   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:25:11.676114   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:11.699074   95547 start.go:303] post-start completed in 183.421937ms
	I0717 16:25:11.699155   95547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:25:11.699218   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.751120   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.841678   95547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:25:11.846889   95547 fix.go:56] fixHost completed within 5.07277014s
	I0717 16:25:11.846903   95547 start.go:83] releasing machines lock for "newest-cni-958000", held for 5.07282584s
	I0717 16:25:11.846978   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:11.899786   95547 ssh_runner.go:195] Run: cat /version.json
	I0717 16:25:11.899793   95547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:25:11.899866   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.899886   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.956056   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.956065   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:12.045227   95547 ssh_runner.go:195] Run: systemctl --version
	I0717 16:25:12.156357   95547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:25:12.162287   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:25:12.180338   95547 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:25:12.180425   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 16:25:12.190049   95547 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
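
The two find commands above tidy the CNI directory before the chosen network plugin is applied: the first patches any loopback config in place (adding a "name" key if missing and pinning cniVersion to 1.0.0), and the second would rename bridge/podman configs to *.mk_disabled so they cannot shadow kindnet; here none were found. The core of the sed rewrite is this substitution (Go sketch; the sample input is made up):

	// Sketch: pin cniVersion in a loopback CNI config, as the
	// sed invocation above does.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := []byte(`{"cniVersion": "0.3.1", "name": "loopback", "type": "loopback"}`)
		re := regexp.MustCompile(`"cniVersion": ".*?"`)
		fmt.Printf("%s\n", re.ReplaceAll(conf, []byte(`"cniVersion": "1.0.0"`)))
	}
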
	I0717 16:25:12.190063   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.190078   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.190239   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.206358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 16:25:12.216606   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:25:12.226407   95547 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.226473   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:25:12.236570   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.246484   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:25:12.256358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.266621   95547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:25:12.275927   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:25:12.285739   95547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:25:12.294620   95547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:25:12.303047   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.380317   95547 ssh_runner.go:195] Run: sudo systemctl restart containerd
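
The sed chain above aligns containerd with the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux shim is rewritten to io.containerd.runc.v2, the sandbox image is pinned to registry.k8s.io/pause:3.9, and conf_dir is pointed at /etc/cni/net.d before containerd is restarted. Docker and the kubelet are configured for the same driver below; a mismatch between the runtime's and the kubelet's cgroup drivers is a classic cause of node startup failures. A small probe in the spirit of the `docker info --format {{.CgroupDriver}}` check that appears later in this phase:

	// Sketch: verify the runtime reports the expected cgroup driver.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println("docker info failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "cgroupfs" {
			fmt.Println("cgroup driver mismatch:", got)
		}
	}
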
	I0717 16:25:12.453549   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.453566   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.453641   95547 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:25:12.466611   95547 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:25:12.466696   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:25:12.480057   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.498435   95547 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:25:12.503191   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:25:12.513350   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:25:12.555466   95547 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:25:12.673456   95547 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:25:12.773745   95547 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.773763   95547 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:25:12.791921   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.877882   95547 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:25:13.154981   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.219541   95547 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 16:25:13.295019   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.363147   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.433926   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 16:25:13.447016   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.523091   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 16:25:13.603241   95547 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 16:25:13.603377   95547 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 16:25:13.608402   95547 start.go:534] Will wait 60s for crictl version
	I0717 16:25:13.608470   95547 ssh_runner.go:195] Run: which crictl
	I0717 16:25:13.613017   95547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 16:25:13.659250   95547 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
	I0717 16:25:13.659347   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.684487   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.753095   95547 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 16:25:13.753268   95547 cli_runner.go:164] Run: docker exec -t newest-cni-958000 dig +short host.docker.internal
	I0717 16:25:13.866872   95547 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:25:13.867000   95547 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:25:13.872298   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
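
The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` idiom above rewrites /etc/hosts without renaming it, which matters because /etc/hosts is typically bind-mounted into the container and a rename-based edit (as `sed -i` performs) would fail there; copying over the existing file works. An equivalent sketch in Go, with the tab-separated entry exactly as logged:

	// Sketch: drop any old host.minikube.internal entry and append
	// the fresh one, overwriting the bind-mounted file in place.
	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		kept := []string{}
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.65.254\thost.minikube.internal")
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
			panic(err)
		}
	}
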
	I0717 16:25:13.883304   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:13.958196   95547 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 16:25:13.981125   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:13.981274   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.004068   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.004086   95547 docker.go:566] Images already preloaded, skipping extraction
	I0717 16:25:14.004186   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.024670   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.024690   95547 cache_images.go:84] Images are preloaded, skipping loading
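
Both `docker images` listings already contain the complete v1.27.3 image set, so cache_images.go can skip the preload tarball extraction and any per-image loading. The decision reduces to a set-containment check, roughly (sketch; list abbreviated):

	// Sketch: skip preload extraction when every required image is
	// already present in the daemon's image list.
	package main

	import "fmt"

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.27.3",
			"registry.k8s.io/etcd:3.5.7-0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			// ... remaining images from the listing above
		}
		// Populated from `docker images --format {{.Repository}}:{{.Tag}}`.
		have := map[string]bool{
			"registry.k8s.io/kube-apiserver:v1.27.3":     true,
			"registry.k8s.io/etcd:3.5.7-0":               true,
			"gcr.io/k8s-minikube/storage-provisioner:v5": true,
		}
		preloaded := true
		for _, img := range required {
			if !have[img] {
				preloaded = false
				break
			}
		}
		fmt.Println("skip extraction:", preloaded)
	}
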
	I0717 16:25:14.024801   95547 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:25:14.077002   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:14.077018   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:14.077035   95547 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 16:25:14.077063   95547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-958000 NodeName:newest-cni-958000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 16:25:14.077185   95547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-958000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0717 16:25:14.077264   95547 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-958000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
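
The kubeadm YAML and the kubelet unit above are both rendered from the single options struct logged at kubeadm.go:176: pod CIDR, advertise address, feature gates, and CRI socket flow into the InitConfiguration/ClusterConfiguration documents and into the kubelet ExecStart flags. A minimal rendering sketch using Go's text/template; the fragment and field names are illustrative, not minikube's actual template:

	// Sketch: render a kubeadm config fragment from an options struct.
	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		frag := "apiVersion: kubeadm.k8s.io/v1beta3\n" +
			"kind: InitConfiguration\n" +
			"localAPIEndpoint:\n" +
			"  advertiseAddress: {{.AdvertiseAddress}}\n" +
			"  bindPort: {{.APIServerPort}}\n"
		t := template.Must(template.New("kubeadm").Parse(frag))
		_ = t.Execute(os.Stdout, struct {
			AdvertiseAddress string
			APIServerPort    int
		}{"192.168.67.2", 8443})
	}
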
	I0717 16:25:14.077334   95547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 16:25:14.086761   95547 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:25:14.086820   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:25:14.095445   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 16:25:14.112203   95547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:25:14.128880   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0717 16:25:14.146543   95547 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:25:14.151422   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:25:14.162716   95547 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000 for IP: 192.168.67.2
	I0717 16:25:14.162770   95547 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.163001   95547 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:25:14.163059   95547 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:25:14.163153   95547 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key
	I0717 16:25:14.163217   95547 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e
	I0717 16:25:14.163302   95547 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key
	I0717 16:25:14.163503   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:25:14.163540   95547 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:25:14.163552   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:25:14.163585   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:25:14.163623   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:25:14.163663   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:25:14.163739   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:14.164307   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:25:14.186450   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 16:25:14.207957   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:25:14.230314   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 16:25:14.252824   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:25:14.275629   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:25:14.299110   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:25:14.322509   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:25:14.346431   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:25:14.370734   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:25:14.393755   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:25:14.415972   95547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:25:14.432875   95547 ssh_runner.go:195] Run: openssl version
	I0717 16:25:14.439416   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:25:14.448847   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453579   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453631   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.460543   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:25:14.469676   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:25:14.479277   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483930   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483969   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.490832   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 16:25:14.500256   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:25:14.509641   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514059   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514107   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.521121   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
	I0717 16:25:14.530596   95547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:25:14.535035   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 16:25:14.542142   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 16:25:14.548992   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 16:25:14.556247   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 16:25:14.562969   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 16:25:14.569910   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
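
Two certificate checks close out this phase. The ln -fs targets such as /etc/ssl/certs/b5213941.0 are OpenSSL subject-hash names, the value printed by the `openssl x509 -hash -noout` run just before each link, which is how the system trust store looks certificates up. The `-checkend 86400` probes then confirm each existing cert is still valid for at least another 24 hours, so it can be reused rather than regenerated. A pure-Go equivalent of the checkend probe:

	// Sketch: require at least 24h of remaining validity on a PEM
	// certificate, like `openssl x509 -noout -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile(os.Args[1])
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
	}
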
	I0717 16:25:14.577016   95547 kubeadm.go:404] StartCluster: {Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:14.577138   95547 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:14.597277   95547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:25:14.606836   95547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 16:25:14.606848   95547 kubeadm.go:636] restartCluster start
	I0717 16:25:14.606905   95547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 16:25:14.615339   95547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:14.615416   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:14.667716   95547 kubeconfig.go:135] verify returned: extract IP: "newest-cni-958000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:14.667891   95547 kubeconfig.go:146] "newest-cni-958000" context is missing from /Users/jenkins/minikube-integration/16899-76867/kubeconfig - will repair!
	I0717 16:25:14.668222   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.669865   95547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 16:25:14.679195   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:14.679301   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:14.689873   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.190616   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.190756   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.203103   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.691413   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.691529   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.703724   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.190732   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.190895   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.203130   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.690058   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.690232   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.702437   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.192048   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.192249   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.204452   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.692056   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.692243   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.704633   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.191011   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.191119   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.202854   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.690250   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.690393   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.702597   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.192051   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.192236   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.204550   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.690110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.690222   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.702215   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.192068   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.192289   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.204520   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.691333   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.691498   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.704042   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.192110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.192288   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.205001   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.692106   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.692271   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.704783   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.191610   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.191785   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.203938   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.691211   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.691358   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.703668   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.191296   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.191347   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.202194   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.692152   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.692362   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.704743   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.190257   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:24.190415   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:24.202077   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.681029   95547 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
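
The twenty pgrep probes above are a fixed-cadence wait: roughly every 500ms the check is retried until a ~10s context deadline expires, at which point minikube concludes, as logged, that the apiserver is down and the cluster needs reconfiguring. The shape of that loop in plain Go (minikube uses its own helpers; the probe function here is hypothetical):

	// Sketch: poll a probe every 500ms until a 10s deadline,
	// matching the cadence of the pgrep checks above.
	package poll

	import (
		"context"
		"time"
	)

	func waitForAPIServer(probe func() bool) error {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		tick := time.NewTicker(500 * time.Millisecond)
		defer tick.Stop()
		for {
			if probe() { // e.g. runs `pgrep -xnf kube-apiserver.*minikube.*`
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // "context deadline exceeded", as logged
			case <-tick.C:
			}
		}
	}
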
	I0717 16:25:24.681086   95547 kubeadm.go:1128] stopping kube-system containers ...
	I0717 16:25:24.681227   95547 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:24.704553   95547 docker.go:462] Stopping containers: [259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c]
	I0717 16:25:24.704637   95547 ssh_runner.go:195] Run: docker stop 259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c
	I0717 16:25:24.724991   95547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 16:25:24.736971   95547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:25:24.745788   95547 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 23:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 17 23:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 23:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 23:24 /etc/kubernetes/scheduler.conf
	
	I0717 16:25:24.745848   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 16:25:24.754637   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 16:25:24.763302   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.771915   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.772024   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.780929   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 16:25:24.789904   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.789972   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 16:25:24.799813   95547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811422   95547 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811436   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:24.862369   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.194024   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.326421   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.380893   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
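
Because existing configuration files were found, the restart path re-runs only the individual kubeadm init phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml, instead of a destructive full `kubeadm init`. The sequence corresponds to (Go sketch; the sudo/env PATH wrapper from the log is omitted):

	// Sketch: the kubeadm init phases executed above, driven from Go.
	package restart

	import "os/exec"

	func runInitPhases() error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if err := exec.Command("kubeadm", args...).Run(); err != nil {
				return err
			}
		}
		return nil
	}
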
	I0717 16:25:25.472401   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:25.472527   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.038481   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.538746   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.553153   95547 api_server.go:72] duration metric: took 1.080737378s to wait for apiserver process to appear ...
	I0717 16:25:26.553167   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:26.553177   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:26.554906   95547 api_server.go:269] stopped: https://127.0.0.1:58401/healthz: Get "https://127.0.0.1:58401/healthz": EOF
	I0717 16:25:27.055584   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:28.991380   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:28.991400   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:28.991411   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.045024   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:29.045047   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:29.055087   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.063750   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.063768   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
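
This progression is the apiserver's normal warm-up rather than a new failure: immediately after the restart, anonymous /healthz probes get 403 because the rbac/bootstrap-roles poststarthook has not yet installed the system:public-info-viewer clusterrole (the second 403 body says so explicitly); then /healthz returns 500 listing each still-pending poststarthook with [-], and the pending set shrinks on every ~500ms poll until the endpoint returns 200. A minimal probe sketch; the http.Client is assumed to be configured to trust the cluster CA, and the port is the forwarded one from the log:

	// Sketch: one /healthz probe against the forwarded apiserver port.
	package health

	import (
		"io"
		"net/http"
	)

	func healthy(client *http.Client) (bool, string) {
		resp, err := client.Get("https://127.0.0.1:58401/healthz")
		if err != nil {
			return false, err.Error()
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK, string(body)
	}
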
	I0717 16:25:29.555057   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.560348   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.560366   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.056858   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.063495   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.063517   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.556898   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.565620   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.565645   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:31.055046   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:31.132299   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:31.141094   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:31.141125   95547 api_server.go:131] duration metric: took 4.587880555s to wait for apiserver health ...
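The 4.6s wait above is a plain poll loop: api_server.go re-requests /healthz roughly every 500ms (visible in the timestamps) and treats the 500 responses, where only poststarthook/rbac/bootstrap-roles is still failing, as retryable until the first 200 arrives. A minimal Go sketch of that kind of loop follows; the helper name and timeout are illustrative, not minikube's actual code:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns 200 or the timeout passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		// The bootstrapping apiserver serves a cluster-local cert; this sketch
		// skips verification, whereas a real checker would trust the cluster CA.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				status := resp.StatusCode
				resp.Body.Close()
				if status == http.StatusOK {
					return nil // "returned 200: ok"
				}
				// A 500 while post-start hooks finish is retryable; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s to return 200", url)
	}

	func main() {
		if err := waitForHealthz("https://127.0.0.1:58401/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
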
	I0717 16:25:31.141151   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:31.141159   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:31.162303   95547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 16:25:31.199700   95547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 16:25:31.207311   95547 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 16:25:31.207323   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 16:25:31.225572   95547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
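The pair of steps above is the recurring pattern for node-side files in this log: render the manifest in memory, stream it to the node ("scp memory -->"), then kubectl apply the written file. Assuming kubectl is on PATH, the same effect can be sketched without the intermediate file by feeding the manifest to kubectl over stdin; applyFromMemory is a hypothetical helper, not minikube's ssh_runner:

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// applyFromMemory pipes an in-memory manifest straight into kubectl.
	func applyFromMemory(kubeconfig string, manifest []byte) error {
		// "-f -" tells kubectl to read the manifest from stdin,
		// so nothing has to be written to disk first.
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", "-")
		cmd.Stdin = bytes.NewReader(manifest)
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // kubectl's per-object "created"/"configured" lines
		return err
	}

	func main() {
		manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
		if err := applyFromMemory("/var/lib/minikube/kubeconfig", manifest); err != nil {
			fmt.Println("apply failed:", err)
		}
	}
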
	I0717 16:25:31.874913   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:31.882378   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:31.882403   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:31.882412   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:31.882419   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:31.882438   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:31.882444   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:31.882450   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:31.882455   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:31.882461   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:31.882466   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:31.882470   95547 system_pods.go:74] duration metric: took 7.544921ms to wait for pod list to return data ...
	I0717 16:25:31.882479   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:31.938022   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:31.938036   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:31.938085   95547 node_conditions.go:105] duration metric: took 55.598559ms to run NodePressure ...
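The two readiness probes above (system_pods.go and node_conditions.go) are simple API reads: list the kube-system pods, then read each node's reported capacity. A self-contained client-go sketch of the same two reads, assuming the kubeconfig path seen elsewhere in this log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used on the node in this log; adjust for a real host.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// system_pods.go-style check: how many kube-system pods exist?
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// node_conditions.go-style check: read each node's reported capacity.
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
		}
	}
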
	I0717 16:25:31.938111   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:32.257733   95547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 16:25:32.267463   95547 ops.go:34] apiserver oom_adj: -16
	I0717 16:25:32.267475   95547 kubeadm.go:640] restartCluster took 17.660339955s
	I0717 16:25:32.267487   95547 kubeadm.go:406] StartCluster complete in 17.690198187s
	I0717 16:25:32.267505   95547 settings.go:142] acquiring lock: {Name:mkcd1c9566f766bc2df0b9039d6e9d173f23ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.267594   95547 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:32.268218   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.268476   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 16:25:32.268497   95547 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 16:25:32.268630   95547 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-958000"
	I0717 16:25:32.268647   95547 addons.go:69] Setting dashboard=true in profile "newest-cni-958000"
	I0717 16:25:32.268658   95547 addons.go:69] Setting default-storageclass=true in profile "newest-cni-958000"
	I0717 16:25:32.268666   95547 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-958000"
	W0717 16:25:32.268674   95547 addons.go:240] addon storage-provisioner should already be in state true
	I0717 16:25:32.268674   95547 addons.go:231] Setting addon dashboard=true in "newest-cni-958000"
	I0717 16:25:32.268679   95547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-958000"
	W0717 16:25:32.268686   95547 addons.go:240] addon dashboard should already be in state true
	I0717 16:25:32.268654   95547 addons.go:69] Setting metrics-server=true in profile "newest-cni-958000"
	I0717 16:25:32.268722   95547 addons.go:231] Setting addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:32.268731   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.268741   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	W0717 16:25:32.268734   95547 addons.go:240] addon metrics-server should already be in state true
	I0717 16:25:32.268795   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:32.268822   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.269097   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269199   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269283   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269349   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
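All four inspect calls above use the same Go-template query: docker container inspect --format={{.State.Status}} prints only the container's state string ("running", "exited", ...) instead of the full JSON document, which is why the driver can poll it cheaply before each addon step.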
	I0717 16:25:32.281119   95547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-958000" context rescaled to 1 replicas
	I0717 16:25:32.281190   95547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 16:25:32.304896   95547 out.go:177] * Verifying Kubernetes components...
	I0717 16:25:32.345271   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:25:32.380257   95547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:25:32.380262   95547 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 16:25:32.380221   95547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 16:25:32.368883   95547 addons.go:231] Setting addon default-storageclass=true in "newest-cni-958000"
	I0717 16:25:32.401301   95547 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 16:25:32.401371   95547 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 16:25:32.422071   95547 addons.go:240] addon default-storageclass should already be in state true
	I0717 16:25:32.422071   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 16:25:32.422111   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.443317   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 16:25:32.443331   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 16:25:32.422116   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 16:25:32.443356   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 16:25:32.443397   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443405   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443460   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.447630   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.457995   95547 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 16:25:32.458158   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.526486   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.526574   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.528382   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.530005   95547 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.530023   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 16:25:32.530137   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.537419   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:32.537522   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:32.558771   95547 api_server.go:72] duration metric: took 277.518719ms to wait for apiserver process to appear ...
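For reference, pgrep -xnf matches the pattern against each process's full command line (-f), requires the whole line to match (-x), and returns only the newest matching process (-n), so the command above resolves to the PID of the kube-apiserver most recently started for this profile.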
	I0717 16:25:32.558796   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:32.558819   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:32.568638   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:32.571219   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:32.571237   95547 api_server.go:131] duration metric: took 12.431888ms to wait for apiserver health ...
	I0717 16:25:32.571246   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:32.579738   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:32.579759   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:32.579770   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:32.579782   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:32.579799   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:32.579807   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:32.579818   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:32.579827   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:32.579851   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:32.579865   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:32.579875   95547 system_pods.go:74] duration metric: took 8.621395ms to wait for pod list to return data ...
	I0717 16:25:32.579884   95547 default_sa.go:34] waiting for default service account to be created ...
	I0717 16:25:32.583269   95547 default_sa.go:45] found service account: "default"
	I0717 16:25:32.583283   95547 default_sa.go:55] duration metric: took 3.39363ms for default service account to be created ...
	I0717 16:25:32.583292   95547 kubeadm.go:581] duration metric: took 302.051979ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0717 16:25:32.583303   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:32.598048   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.641217   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:32.641232   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:32.641245   95547 node_conditions.go:105] duration metric: took 57.936871ms to run NodePressure ...
	I0717 16:25:32.641255   95547 start.go:228] waiting for startup goroutines ...
	I0717 16:25:32.753584   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 16:25:32.754402   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 16:25:32.754415   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 16:25:32.756489   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.758325   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 16:25:32.758365   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 16:25:32.839099   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 16:25:32.839121   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 16:25:32.841403   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 16:25:32.841417   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 16:25:32.866710   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 16:25:32.866725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 16:25:32.868340   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.868355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 16:25:32.948686   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 16:25:32.948700   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 16:25:32.949624   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.974331   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 16:25:32.974355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 16:25:33.069706   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 16:25:33.069725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 16:25:33.169084   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 16:25:33.169104   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 16:25:33.250984   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 16:25:33.251007   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 16:25:33.338269   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.338298   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 16:25:33.367104   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.894705   95547 addons.go:467] Verifying addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:34.484789   95547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.117630675s)
	I0717 16:25:34.506638   95547 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-958000 addons enable metrics-server	
	
	
	I0717 16:25:34.526957   95547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0717 16:25:34.548482   95547 addons.go:502] enable addons completed in 2.279938211s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0717 16:25:34.548518   95547 start.go:233] waiting for cluster config update ...
	I0717 16:25:34.548529   95547 start.go:242] writing updated cluster config ...
	I0717 16:25:34.569980   95547 ssh_runner.go:195] Run: rm -f paused
	I0717 16:25:34.613358   95547 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 16:25:34.634859   95547 out.go:177] * Done! kubectl is now configured to use "newest-cni-958000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* Jul 17 23:25:34 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:34 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:35 newest-cni-958000 dockerd[754]: time="2023-07-17T23:25:35.038565179Z" level=info msg="ignoring event" container=da079162cd3af52f15327e72540d64b13c906d324cd215b74c726883038c6bce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:25:35 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95ed576fd291635af1bd974328dca78efdedfa42f535677c78e2875aeaa88004/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:25:35 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:35Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:36 newest-cni-958000 dockerd[754]: time="2023-07-17T23:25:36.159975768Z" level=info msg="ignoring event" container=95ed576fd291635af1bd974328dca78efdedfa42f535677c78e2875aeaa88004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: W0717 23:25:36.244512     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: W0717 23:25:36.659738     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.871296003Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.891228987Z" level=warning msg="Cannot unpause container 3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681: cannot resume a stopped container: unknown"
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.903764341Z" level=info msg="ignoring event" container=3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:00 newest-cni-958000 cri-dockerd[978]: W0717 23:26:00.917334     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:26:09 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:09.267020140Z" level=info msg="ignoring event" container=2d0fd704caeb3026651e755beddb6a05f2d3a2fad2e8b9588b43745c0c25d924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:09 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Jul 17 23:26:10 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:10Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5d78c9869d-78dd9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"da079162cd3af52f15327e72540d64b13c906d324cd215b74c726883038c6bce\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"9032b1942e842d20f2ded9cad0fbb1c3adf2449dbd0384be221a4495b0f9ab3c\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"259b5c6baf46b357c1f03685b9bb76a50ec59bb380749ecd052e5dddb7ed72b9\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:11.158519987Z" level=info msg="ignoring event" container=e56a04d8c11f13393d19806ca37bda316d020a73e9344f37eafb77cad81b7c76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45d6dca7e44a993a0acc113e844ea29383a5a51be185cae833102d5aaab45237/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86e1c46ff1c55b227d36ce3eb60a8bdb0e4260aa64b155545c25759bdeabee70/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910479620edf8dabf7a625d485714683f2ea088239c702e2f9966eff21a479a7/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c7c1a6a950e6c77f46b8fad421f8b39fc702edf250db5ac334687cab0a724d0c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 23:26:11 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:11.974035565Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43a70c51d79c6       6e38f40d628db                                                                              1 second ago         Running             storage-provisioner       2                   791e47bc9be49       storage-provisioner
	86d37656adc59       ead0a4a53df89                                                                              1 second ago         Running             coredns                   2                   c7c1a6a950e6c       coredns-5d78c9869d-78dd9
	67cb39ad20197       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974   39 seconds ago       Running             kindnet-cni               0                   83660f73fc15e       kindnet-2qwmv
	3d53869f74c09       ead0a4a53df89                                                                              42 seconds ago       Exited              coredns                   1                   e56a04d8c11f1       coredns-5d78c9869d-78dd9
	2d0fd704caeb3       6e38f40d628db                                                                              42 seconds ago       Exited              storage-provisioner       1                   791e47bc9be49       storage-provisioner
	e0b6a3ada1e11       5780543258cf0                                                                              43 seconds ago       Running             kube-proxy                1                   3903e0220a290       kube-proxy-vgrgn
	777fc3ba407a6       86b6af7dd652c                                                                              46 seconds ago       Running             etcd                      1                   1989fe329f866       etcd-newest-cni-958000
	200d42a5c64f0       41697ceeb70b3                                                                              46 seconds ago       Running             kube-scheduler            1                   7edfd46cd1675       kube-scheduler-newest-cni-958000
	8a15f393d56ff       7cffc01dba0e1                                                                              46 seconds ago       Running             kube-controller-manager   1                   dde7a0b9d7a65       kube-controller-manager-newest-cni-958000
	7c126a6bfb9fe       08a0c939e61b7                                                                              46 seconds ago       Running             kube-apiserver            1                   47fdf1ecbd4e9       kube-apiserver-newest-cni-958000
	9b2697662f0c9       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   ea864920cf134       coredns-5d78c9869d-6l7gl
	290c651ad9103       5780543258cf0                                                                              About a minute ago   Exited              kube-proxy                0                   a9e6d59ad7d37       kube-proxy-vgrgn
	f2c1f869f084f       86b6af7dd652c                                                                              About a minute ago   Exited              etcd                      0                   82e26adeaf6c6       etcd-newest-cni-958000
	b6ee749db4d7a       08a0c939e61b7                                                                              About a minute ago   Exited              kube-apiserver            0                   8e3b99c6893dd       kube-apiserver-newest-cni-958000
	d742b3e72f9ac       7cffc01dba0e1                                                                              About a minute ago   Exited              kube-controller-manager   0                   380a5c188865d       kube-controller-manager-newest-cni-958000
	03a597dde7e87       41697ceeb70b3                                                                              About a minute ago   Exited              kube-scheduler            0                   acd6f44921724       kube-scheduler-newest-cni-958000
	
	* 
	* ==> coredns [3d53869f74c0] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [86d37656adc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33190 - 18215 "HINFO IN 2316843258821924969.4346406040297016947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008529913s
	
	* 
	* ==> coredns [9b2697662f0c] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:33625 - 1270 "HINFO IN 75311783510755995.4415874585306800306. udp 55 false 512" - - 0 5.000053693s
	[ERROR] plugin/errors: 2 75311783510755995.4415874585306800306. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-958000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-958000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=newest-cni-958000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T16_24_39_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 23:24:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-958000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:26:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-958000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 322c480d41584a4a8b6e62cada18398e
	  System UUID:                322c480d41584a4a8b6e62cada18398e
	  Boot ID:                    39ad526a-f9da-4327-9b2d-183cb5a85afa
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-78dd9                      100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     82s
	  kube-system                 etcd-newest-cni-958000                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         95s
	  kube-system                 kindnet-2qwmv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-newest-cni-958000              250m (4%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-newest-cni-958000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-vgrgn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-newest-cni-958000              100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 metrics-server-74d5c6b9c-v6xx7                100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         80s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kubernetes-dashboard        dashboard-metrics-scraper-59c665bc77-z5fjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-b8qmr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (15%)  100m (1%)
	  memory             420Mi (7%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  95s                kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s                kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s                kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           83s                node-controller  Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller
	  Normal  Starting                 48s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4s                 kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s                 kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s                 kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4s                 kubelet          Updated Node Allocatable limit across pods
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [777fc3ba407a] <==
	* {"level":"info","ts":"2023-07-17T23:25:26.462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:25:26.462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-958000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:25:28.055Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T23:25:28.055Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	WARNING: 2023/07/17 23:25:36 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-07-17T23:26:11.598Z","caller":"traceutil/trace.go:171","msg":"trace[1659958405] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"128.203479ms","start":"2023-07-17T23:26:11.470Z","end":"2023-07-17T23:26:11.598Z","steps":["trace[1659958405] 'process raft request'  (duration: 127.968235ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:26:13.172Z","caller":"traceutil/trace.go:171","msg":"trace[389432424] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"111.388732ms","start":"2023-07-17T23:26:13.061Z","end":"2023-07-17T23:26:13.172Z","steps":["trace[389432424] 'process raft request'  (duration: 111.349537ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:26:13.173Z","caller":"traceutil/trace.go:171","msg":"trace[45499688] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"114.525682ms","start":"2023-07-17T23:26:13.058Z","end":"2023-07-17T23:26:13.173Z","steps":["trace[45499688] 'process raft request'  (duration: 112.404717ms)"],"step_count":1}
	
	* 
	* ==> etcd [f2c1f869f084] <==
	* {"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-958000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T23:24:51.137Z","caller":"traceutil/trace.go:171","msg":"trace[602594232] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"101.718196ms","start":"2023-07-17T23:24:51.035Z","end":"2023-07-17T23:24:51.137Z","steps":["trace[602594232] 'process raft request'  (duration: 99.917385ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:24:51.137Z","caller":"traceutil/trace.go:171","msg":"trace[421865934] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"100.77849ms","start":"2023-07-17T23:24:51.036Z","end":"2023-07-17T23:24:51.137Z","steps":["trace[421865934] 'process raft request'  (duration: 100.32683ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:24:54.565Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T23:24:54.565Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"newest-cni-958000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-07-17T23:24:54.646Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-07-17T23:24:54.647Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:24:54.648Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:24:54.648Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"newest-cni-958000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:26:13 up  7:25,  0 users,  load average: 1.50, 1.18, 1.21
	Linux newest-cni-958000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [67cb39ad2019] <==
	* I0717 23:25:33.439688       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 23:25:33.439762       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 23:25:33.439899       1 main.go:116] setting mtu 65535 for CNI 
	I0717 23:25:33.439938       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 23:25:33.439954       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 23:25:34.037155       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 23:25:34.037196       1 main.go:227] handling current node
	I0717 23:26:09.246563       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 23:26:09.246587       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7c126a6bfb9f] <==
	* W0717 23:25:30.136097       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:25:30.136132       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:25:30.136140       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:25:30.136160       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:25:30.136216       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:25:30.137209       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:25:31.868117       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 23:25:32.077892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 23:25:32.141060       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 23:25:32.143582       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.206.80:443: connect: connection refused
	I0717 23:25:32.143596       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:25:32.239473       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 23:25:32.247517       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 23:25:34.337572       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 23:25:34.466480       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.14.12]
	I0717 23:25:34.479074       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.157.73]
	E0717 23:25:36.093193       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 23:25:36.093240       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 23:25:36.093275       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.867µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 23:25:36.094570       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 23:25:36.095810       1 timeout.go:142] post-timeout activity - time-elapsed: 2.660057ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-74d5c6b9c-v6xx7.1772cb4a33b59d8d" result: <nil>
	I0717 23:26:09.450927       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 23:26:09.543764       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 23:26:09.580601       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [b6ee749db4d7] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.574232       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.574248       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.636582       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [8a15f393d56f] <==
	* I0717 23:26:09.536387       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-59c665bc77-z5fjd"
	I0717 23:26:09.540341       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-958000\" does not exist"
	I0717 23:26:09.560083       1 shared_informer.go:318] Caches are synced for node
	I0717 23:26:09.560322       1 range_allocator.go:174] "Sending events to api server"
	I0717 23:26:09.560453       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0717 23:26:09.560463       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0717 23:26:09.560468       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0717 23:26:09.561486       1 shared_informer.go:318] Caches are synced for GC
	I0717 23:26:09.563788       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 23:26:09.568378       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 23:26:09.573897       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 23:26:09.580566       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 23:26:09.638659       1 shared_informer.go:318] Caches are synced for TTL
	I0717 23:26:09.638716       1 shared_informer.go:318] Caches are synced for taint
	I0717 23:26:09.639022       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 23:26:09.639133       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-958000"
	I0717 23:26:09.639739       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 23:26:09.639827       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 23:26:09.639979       1 taint_manager.go:211] "Sending events to api server"
	I0717 23:26:09.639996       1 event.go:307] "Event occurred" object="newest-cni-958000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller"
	I0717 23:26:09.658380       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:26:09.661203       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:26:09.977039       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:26:09.978440       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:26:09.978463       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [d742b3e72f9a] <==
	* I0717 23:24:50.016551       1 shared_informer.go:318] Caches are synced for expand
	I0717 23:24:50.017628       1 shared_informer.go:318] Caches are synced for deployment
	I0717 23:24:50.023667       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0717 23:24:50.044424       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0717 23:24:50.097549       1 shared_informer.go:318] Caches are synced for crt configmap
	I0717 23:24:50.103420       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 23:24:50.130251       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 23:24:50.135141       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 23:24:50.172633       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:24:50.218810       1 shared_informer.go:318] Caches are synced for disruption
	I0717 23:24:50.220262       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:24:50.329965       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vgrgn"
	I0717 23:24:50.336027       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2qwmv"
	I0717 23:24:50.536007       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:24:50.587722       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:24:50.587820       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 23:24:50.771807       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 23:24:50.897141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 23:24:51.024597       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-6l7gl"
	I0717 23:24:51.141891       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-78dd9"
	I0717 23:24:51.176107       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-6l7gl"
	I0717 23:24:53.759697       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-74d5c6b9c to 1"
	I0717 23:24:53.765252       1 event.go:307] "Event occurred" object="kube-system/metrics-server-74d5c6b9c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-74d5c6b9c-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0717 23:24:53.770121       1 replica_set.go:544] sync "kube-system/metrics-server-74d5c6b9c" failed with pods "metrics-server-74d5c6b9c-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0717 23:24:53.774729       1 event.go:307] "Event occurred" object="kube-system/metrics-server-74d5c6b9c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-74d5c6b9c-v6xx7"
	
	* 
	* ==> kube-proxy [290c651ad910] <==
	* I0717 23:24:51.939075       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 23:24:51.939160       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 23:24:51.939185       1 server_others.go:554] "Using iptables proxy"
	I0717 23:24:51.968165       1 server_others.go:192] "Using iptables Proxier"
	I0717 23:24:51.968252       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 23:24:51.968260       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 23:24:51.968276       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 23:24:51.968298       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 23:24:51.968879       1 server.go:658] "Version info" version="v1.27.3"
	I0717 23:24:51.968946       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:24:51.971242       1 config.go:188] "Starting service config controller"
	I0717 23:24:51.972640       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 23:24:51.973381       1 config.go:315] "Starting node config controller"
	I0717 23:24:51.973415       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 23:24:51.973473       1 config.go:97] "Starting endpoint slice config controller"
	I0717 23:24:51.973499       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 23:24:52.074361       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 23:24:52.074404       1 shared_informer.go:318] Caches are synced for service config
	I0717 23:24:52.074490       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e0b6a3ada1e1] <==
	* I0717 23:25:30.547806       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 23:25:30.547864       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 23:25:30.547886       1 server_others.go:554] "Using iptables proxy"
	I0717 23:25:30.650654       1 server_others.go:192] "Using iptables Proxier"
	I0717 23:25:30.650675       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 23:25:30.650684       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 23:25:30.650695       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 23:25:30.650718       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 23:25:30.651324       1 server.go:658] "Version info" version="v1.27.3"
	I0717 23:25:30.651332       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:25:30.656340       1 config.go:315] "Starting node config controller"
	I0717 23:25:30.656357       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 23:25:30.656920       1 config.go:97] "Starting endpoint slice config controller"
	I0717 23:25:30.656927       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 23:25:30.657096       1 config.go:188] "Starting service config controller"
	I0717 23:25:30.657110       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 23:25:30.757099       1 shared_informer.go:318] Caches are synced for node config
	I0717 23:25:30.757158       1 shared_informer.go:318] Caches are synced for service config
	I0717 23:25:30.757277       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [03a597dde7e8] <==
	* W0717 23:24:35.154960       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:35.155178       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:35.155333       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 23:24:35.155624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 23:24:35.155454       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:24:35.155849       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 23:24:35.155460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:24:35.155860       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 23:24:35.156825       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:35.156903       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:36.156418       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:24:36.156467       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 23:24:36.181567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 23:24:36.181628       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 23:24:36.237032       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:24:36.237183       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 23:24:36.252299       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:36.252357       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:36.382176       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 23:24:36.382286       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 23:24:36.397445       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:36.397505       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 23:24:36.751460       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 23:24:54.556680       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0717 23:24:54.556779       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [200d42a5c64f] <==
	* W0717 23:25:26.745896       1 feature_gate.go:241] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0717 23:25:27.251169       1 serving.go:348] Generated self-signed cert in-memory
	W0717 23:25:28.992796       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 23:25:28.992880       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 23:25:28.992889       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 23:25:28.992893       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 23:25:29.039601       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 23:25:29.039890       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:25:29.041829       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 23:25:29.042141       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 23:25:29.042277       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 23:25:29.042401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 23:25:29.142796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.842882    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpx8c\" (UniqueName: \"kubernetes.io/projected/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-kube-api-access-cpx8c\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.842932    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-cni-cfg\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843000    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-lib-modules\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843058    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr58q\" (UniqueName: \"kubernetes.io/projected/de57e5a7-c7e0-4452-85ff-1a3b1d22f072-kube-api-access-kr58q\") pod \"coredns-5d78c9869d-78dd9\" (UID: \"de57e5a7-c7e0-4452-85ff-1a3b1d22f072\") " pod="kube-system/coredns-5d78c9869d-78dd9"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843125    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn6p\" (UniqueName: \"kubernetes.io/projected/758088d8-d032-45f2-8373-0d46b877596f-kube-api-access-kxn6p\") pod \"metrics-server-74d5c6b9c-v6xx7\" (UID: \"758088d8-d032-45f2-8373-0d46b877596f\") " pod="kube-system/metrics-server-74d5c6b9c-v6xx7"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843170    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-xtables-lock\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843200    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tl6\" (UniqueName: \"kubernetes.io/projected/807d16b2-39e3-48df-8d35-1bc6defd1534-kube-api-access-l2tl6\") pod \"dashboard-metrics-scraper-59c665bc77-z5fjd\" (UID: \"807d16b2-39e3-48df-8d35-1bc6defd1534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77-z5fjd"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843222    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-kube-proxy\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843258    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-xtables-lock\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843315    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/807d16b2-39e3-48df-8d35-1bc6defd1534-tmp-volume\") pod \"dashboard-metrics-scraper-59c665bc77-z5fjd\" (UID: \"807d16b2-39e3-48df-8d35-1bc6defd1534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77-z5fjd"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843426    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9svj\" (UniqueName: \"kubernetes.io/projected/5ae875a2-6788-4760-929d-65fc7520e407-kube-api-access-j9svj\") pod \"kubernetes-dashboard-5c5cfc8747-b8qmr\" (UID: \"5ae875a2-6788-4760-929d-65fc7520e407\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-b8qmr"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843487    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-lib-modules\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843599    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de57e5a7-c7e0-4452-85ff-1a3b1d22f072-config-volume\") pod \"coredns-5d78c9869d-78dd9\" (UID: \"de57e5a7-c7e0-4452-85ff-1a3b1d22f072\") " pod="kube-system/coredns-5d78c9869d-78dd9"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843625    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08822a3c-72fd-4c06-abfd-98dcb808d89c-tmp\") pod \"storage-provisioner\" (UID: \"08822a3c-72fd-4c06-abfd-98dcb808d89c\") " pod="kube-system/storage-provisioner"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843644    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/758088d8-d032-45f2-8373-0d46b877596f-tmp-dir\") pod \"metrics-server-74d5c6b9c-v6xx7\" (UID: \"758088d8-d032-45f2-8373-0d46b877596f\") " pod="kube-system/metrics-server-74d5c6b9c-v6xx7"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843661    3686 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.062038    3686 scope.go:115] "RemoveContainer" containerID="2d0fd704caeb3026651e755beddb6a05f2d3a2fad2e8b9588b43745c0c25d924"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.762675    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="910479620edf8dabf7a625d485714683f2ea088239c702e2f9966eff21a479a7"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.958406    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e1c46ff1c55b227d36ce3eb60a8bdb0e4260aa64b155545c25759bdeabee70"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.968273    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d6dca7e44a993a0acc113e844ea29383a5a51be185cae833102d5aaab45237"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.977251    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-newest-cni-958000\" already exists" pod="kube-system/kube-scheduler-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.977974    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-958000\" already exists" pod="kube-system/kube-controller-manager-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.978318    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-newest-cni-958000\" already exists" pod="kube-system/kube-apiserver-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.978379    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-newest-cni-958000\" already exists" pod="kube-system/etcd-newest-cni-958000"
	Jul 17 23:26:12 newest-cni-958000 kubelet[3686]: I0717 23:26:12.988796    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56a04d8c11f13393d19806ca37bda316d020a73e9344f37eafb77cad81b7c76"
	
	* 
	* ==> storage-provisioner [2d0fd704caeb] <==
	* I0717 23:25:30.549065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 23:26:09.188264       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	
	* 
	* ==> storage-provisioner [43a70c51d79c] <==
	* I0717 23:26:11.341536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 23:26:11.358632       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 23:26:11.358688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-958000 -n newest-cni-958000
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-958000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr: exit status 1 (94.108474ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-v6xx7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-59c665bc77-z5fjd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-b8qmr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-958000
helpers_test.go:235: (dbg) docker inspect newest-cni-958000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f",
	        "Created": "2023-07-17T23:24:23.225622771Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1325833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-07-17T23:25:07.13089368Z",
	            "FinishedAt": "2023-07-17T23:25:05.304121825Z"
	        },
	        "Image": "sha256:c6cc01e6091959400f260dc442708e7c71630b58dab1f7c344cb00926bd84950",
	        "ResolvConfPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/hostname",
	        "HostsPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/hosts",
	        "LogPath": "/var/lib/docker/containers/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f/2f07d08fcac68bc124198b04526876df655e3bc0c0cb463e4ec900bc7d08970f-json.log",
	        "Name": "/newest-cni-958000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-958000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-958000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6-init/diff:/var/lib/docker/overlay2/388817d1807139a2b5fe2987f16fc65d58f6720a0b0343097a59eb837a278a0e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc067305dc1daa5af8b80f08154a95cc9b1d413de8fa1dd95b1d5f6e6bad23f6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-958000",
	                "Source": "/var/lib/docker/volumes/newest-cni-958000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-958000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-958000",
	                "name.minikube.sigs.k8s.io": "newest-cni-958000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de349050690f96ccf7829a7b7b7205acc105069488f243c425892eaad4dea234",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58402"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58403"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58404"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "58401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/de349050690f",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-958000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "2f07d08fcac6",
	                        "newest-cni-958000"
	                    ],
	                    "NetworkID": "762eaeabef634f98c192987f9503cdd053428e8e0cf233912e0851abd54ef938",
	                    "EndpointID": "9b730e06e664033e3b7fd17e273fca24db1c48d1c104c08427de98c7d2577622",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-958000 -n newest-cni-958000
helpers_test.go:244: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-958000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p newest-cni-958000 logs -n 25: (3.866747875s)
helpers_test.go:252: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:10 PDT | 17 Jul 23 16:16 PDT |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-306000 sudo                             | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p embed-certs-306000                                  | embed-certs-306000           | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	| delete  | -p                                                     | disable-driver-mounts-278000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:16 PDT |
	|         | disable-driver-mounts-278000                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:16 PDT | 17 Jul 23 16:17 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-651000  | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:17 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:17 PDT | 17 Jul 23 16:18 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-651000       | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:18 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:18 PDT | 17 Jul 23 16:23 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3                           |                              |         |         |                     |                     |
	| ssh     | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-651000 | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | default-k8s-diff-port-651000                           |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-958000             | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:24 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:24 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-958000                  | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-958000 --memory=2200 --alsologtostderr   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.27.3          |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-958000 sudo                              | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:25 PDT | 17 Jul 23 16:25 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-958000                                   | newest-cni-958000            | jenkins | v1.31.0 | 17 Jul 23 16:26 PDT | 17 Jul 23 16:26 PDT |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
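
	For convenience, here is the final `start` invocation recorded in the audit rows above, rejoined from its wrapped table cells into a single runnable command line (binary per the MINIKUBE_BIN setting logged below; this reconstruction is an editorial aid, not part of the audit log):
	
	    out/minikube-darwin-amd64 start -p newest-cni-958000 --memory=2200 \
	      --alsologtostderr --wait=apiserver,system_pods,default_sa \
	      --feature-gates ServerSideApply=true --network-plugin=cni \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=docker --kubernetes-version=v1.27.3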
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 16:25:06
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
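	
	Every line below follows the [IWEF]mmdd glog format described in this header, so severity-based filtering is straightforward. A minimal illustrative filter (not part of the log) that keeps only warning and error lines from a saved logs.txt:
	
	    # W/E level prefix followed by the 4-digit mmdd date, per the header above
	    grep -E '^[[:space:]]*[WE][0-9]{4} ' logs.txt
	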
	I0717 16:25:06.009437   95547 out.go:296] Setting OutFile to fd 1 ...
	I0717 16:25:06.009596   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009601   95547 out.go:309] Setting ErrFile to fd 2...
	I0717 16:25:06.009605   95547 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 16:25:06.009788   95547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 16:25:06.011309   95547 out.go:303] Setting JSON to false
	I0717 16:25:06.031026   95547 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":26674,"bootTime":1689609632,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 16:25:06.031121   95547 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 16:25:06.089845   95547 out.go:177] * [newest-cni-958000] minikube v1.31.0 on Darwin 13.4.1
	I0717 16:25:06.110752   95547 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 16:25:06.110751   95547 notify.go:220] Checking for updates...
	I0717 16:25:06.131899   95547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:06.152973   95547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 16:25:06.194927   95547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 16:25:06.216130   95547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 16:25:06.237662   95547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 16:25:06.259830   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:06.260607   95547 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 16:25:06.316213   95547 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 16:25:06.316332   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.421076   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.408164421 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:25:06.442961   95547 out.go:177] * Using the docker driver based on existing profile
	I0717 16:25:06.485802   95547 start.go:298] selected driver: docker
	I0717 16:25:06.485854   95547 start.go:880] validating driver "docker" against &{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:06.485978   95547 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 16:25:06.490018   95547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 16:25:06.592476   95547 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 23:25:06.580883045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 16:25:06.592703   95547 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0717 16:25:06.592726   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:06.592738   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:06.592749   95547 start_flags.go:319] config:
	{Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:06.636142   95547 out.go:177] * Starting control plane node newest-cni-958000 in cluster newest-cni-958000
	I0717 16:25:06.657526   95547 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 16:25:06.701246   95547 out.go:177] * Pulling base image ...
	I0717 16:25:06.722545   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:06.722538   95547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 16:25:06.722637   95547 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 16:25:06.722662   95547 cache.go:57] Caching tarball of preloaded images
	I0717 16:25:06.722849   95547 preload.go:174] Found /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0717 16:25:06.722871   95547 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 16:25:06.723822   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:06.773809   95547 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon, skipping pull
	I0717 16:25:06.773830   95547 cache.go:145] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in daemon, skipping load
	I0717 16:25:06.773849   95547 cache.go:195] Successfully downloaded all kic artifacts
	I0717 16:25:06.773902   95547 start.go:365] acquiring machines lock for newest-cni-958000: {Name:mke5d528d9e88e8bdafae9a78be680113515a9a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0717 16:25:06.773988   95547 start.go:369] acquired machines lock for "newest-cni-958000" in 65.802µs
	I0717 16:25:06.774027   95547 start.go:96] Skipping create...Using existing machine configuration
	I0717 16:25:06.774036   95547 fix.go:54] fixHost starting: 
	I0717 16:25:06.774260   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:06.827574   95547 fix.go:102] recreateIfNeeded on newest-cni-958000: state=Stopped err=<nil>
	W0717 16:25:06.827609   95547 fix.go:128] unexpected machine state, will restart: <nil>
	I0717 16:25:06.849431   95547 out.go:177] * Restarting existing docker container for "newest-cni-958000" ...
	I0717 16:25:06.892254   95547 cli_runner.go:164] Run: docker start newest-cni-958000
	I0717 16:25:07.137692   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:07.191261   95547 kic.go:426] container "newest-cni-958000" state is running.
	I0717 16:25:07.192963   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:07.249585   95547 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/config.json ...
	I0717 16:25:07.249963   95547 machine.go:88] provisioning docker machine ...
	I0717 16:25:07.249988   95547 ubuntu.go:169] provisioning hostname "newest-cni-958000"
	I0717 16:25:07.250075   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:07.309695   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:07.310266   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:07.310286   95547 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-958000 && echo "newest-cni-958000" | sudo tee /etc/hostname
	I0717 16:25:07.311626   95547 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0717 16:25:10.457012   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-958000
	
	I0717 16:25:10.457115   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.508111   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:10.508463   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:10.508476   95547 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-958000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-958000/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-958000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0717 16:25:10.637014   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
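	
	The SSH command above is an idempotent /etc/hosts update: the 127.0.1.1 mapping is added only if the hostname is absent, and an existing 127.0.1.1 line is edited in place rather than duplicated. A standalone sketch of the same pattern (hypothetical helper name, assuming GNU grep/sed on the guest as in the log):
	
	    ensure_host_entry() {  # hypothetical; mirrors the logic logged above
	      local name="$1"
	      if ! grep -q "\s${name}$" /etc/hosts; then
	        if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	          sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${name}/" /etc/hosts
	        else
	          echo "127.0.1.1 ${name}" | sudo tee -a /etc/hosts
	        fi
	      fi
	    }
	    ensure_host_entry newest-cni-958000
	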
	I0717 16:25:10.637034   95547 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/16899-76867/.minikube CaCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16899-76867/.minikube}
	I0717 16:25:10.637061   95547 ubuntu.go:177] setting up certificates
	I0717 16:25:10.637070   95547 provision.go:83] configureAuth start
	I0717 16:25:10.637143   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:10.688211   95547 provision.go:138] copyHostCerts
	I0717 16:25:10.688327   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem, removing ...
	I0717 16:25:10.688340   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem
	I0717 16:25:10.688433   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.pem (1078 bytes)
	I0717 16:25:10.688654   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem, removing ...
	I0717 16:25:10.688661   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem
	I0717 16:25:10.688722   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/cert.pem (1123 bytes)
	I0717 16:25:10.688885   95547 exec_runner.go:144] found /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem, removing ...
	I0717 16:25:10.688890   95547 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem
	I0717 16:25:10.688954   95547 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16899-76867/.minikube/key.pem (1675 bytes)
	I0717 16:25:10.689095   95547 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem org=jenkins.newest-cni-958000 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-958000]
	I0717 16:25:10.742105   95547 provision.go:172] copyRemoteCerts
	I0717 16:25:10.742156   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0717 16:25:10.742207   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:10.794450   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:10.888372   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0717 16:25:10.909833   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0717 16:25:10.931296   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0717 16:25:10.954241   95547 provision.go:86] duration metric: configureAuth took 317.148573ms
	I0717 16:25:10.954254   95547 ubuntu.go:193] setting minikube options for container-runtime
	I0717 16:25:10.954415   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:10.954486   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.006980   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.007328   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.007338   95547 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0717 16:25:11.136352   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0717 16:25:11.136367   95547 ubuntu.go:71] root file system type: overlay
	I0717 16:25:11.136454   95547 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0717 16:25:11.136539   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.189136   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.189503   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.189554   95547 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0717 16:25:11.328134   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0717 16:25:11.328248   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.381008   95547 main.go:141] libmachine: Using SSH client type: native
	I0717 16:25:11.381375   95547 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140d2c0] 0x1410360 <nil>  [] 0s} 127.0.0.1 58402 <nil> <nil>}
	I0717 16:25:11.381389   95547 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0717 16:25:11.515613   95547 main.go:141] libmachine: SSH cmd err, output: <nil>: 
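	
	The command above shows the change-detection pattern used for the docker unit: the rendered unit is written to docker.service.new, and the swap plus restart happens only when diff reports a difference (diff's nonzero exit on differing files drives the || branch; the empty output here means nothing changed). An equivalent standalone sketch (illustrative, not taken from the log):
	
	    unit=/lib/systemd/system/docker.service
	    if ! sudo diff -u "$unit" "$unit.new"; then   # nonzero exit: files differ
	      sudo mv "$unit.new" "$unit"
	      sudo systemctl daemon-reload
	      sudo systemctl -f enable docker
	      sudo systemctl -f restart docker
	    fi
	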
	I0717 16:25:11.515627   95547 machine.go:91] provisioned docker machine in 4.265587448s
	I0717 16:25:11.515638   95547 start.go:300] post-start starting for "newest-cni-958000" (driver="docker")
	I0717 16:25:11.515648   95547 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0717 16:25:11.515732   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0717 16:25:11.515790   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.568259   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.662379   95547 ssh_runner.go:195] Run: cat /etc/os-release
	I0717 16:25:11.666476   95547 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0717 16:25:11.666501   95547 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0717 16:25:11.666509   95547 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0717 16:25:11.666514   95547 info.go:137] Remote host: Ubuntu 22.04.2 LTS
	I0717 16:25:11.666522   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/addons for local assets ...
	I0717 16:25:11.666625   95547 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16899-76867/.minikube/files for local assets ...
	I0717 16:25:11.666772   95547 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem -> 773242.pem in /etc/ssl/certs
	I0717 16:25:11.666950   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0717 16:25:11.676114   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:11.699074   95547 start.go:303] post-start completed in 183.421937ms
	I0717 16:25:11.699155   95547 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 16:25:11.699218   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.751120   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.841678   95547 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0717 16:25:11.846889   95547 fix.go:56] fixHost completed within 5.07277014s
	I0717 16:25:11.846903   95547 start.go:83] releasing machines lock for "newest-cni-958000", held for 5.07282584s
	I0717 16:25:11.846978   95547 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-958000
	I0717 16:25:11.899786   95547 ssh_runner.go:195] Run: cat /version.json
	I0717 16:25:11.899793   95547 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0717 16:25:11.899866   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.899886   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:11.956056   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:11.956065   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:12.045227   95547 ssh_runner.go:195] Run: systemctl --version
	I0717 16:25:12.156357   95547 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0717 16:25:12.162287   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0717 16:25:12.180338   95547 cni.go:236] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0717 16:25:12.180425   95547 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0717 16:25:12.190049   95547 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
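	
	The two find commands above first patch the loopback CNI config in place, then disable any bridge/podman configs by renaming them with a .mk_disabled suffix, so that only the recommended CNI (kindnet here) remains active. A standalone restatement of the disabling step (illustrative, with shell quoting spelled out for direct use):
	
	    # Park any bridge/podman CNI configs that are not already disabled.
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	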
	I0717 16:25:12.190063   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.190078   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.190239   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.206358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0717 16:25:12.216606   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0717 16:25:12.226407   95547 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.226473   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0717 16:25:12.236570   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.246484   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0717 16:25:12.256358   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0717 16:25:12.266621   95547 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0717 16:25:12.275927   95547 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0717 16:25:12.285739   95547 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0717 16:25:12.294620   95547 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0717 16:25:12.303047   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.380317   95547 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0717 16:25:12.453549   95547 start.go:466] detecting cgroup driver to use...
	I0717 16:25:12.453566   95547 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0717 16:25:12.453641   95547 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0717 16:25:12.466611   95547 cruntime.go:276] skipping containerd shutdown because we are bound to it
	I0717 16:25:12.466696   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0717 16:25:12.480057   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0717 16:25:12.498435   95547 ssh_runner.go:195] Run: which cri-dockerd
	I0717 16:25:12.503191   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0717 16:25:12.513350   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0717 16:25:12.555466   95547 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0717 16:25:12.673456   95547 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0717 16:25:12.773745   95547 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0717 16:25:12.773763   95547 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0717 16:25:12.791921   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:12.877882   95547 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0717 16:25:13.154981   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.219541   95547 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0717 16:25:13.295019   95547 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0717 16:25:13.363147   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.433926   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0717 16:25:13.447016   95547 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0717 16:25:13.523091   95547 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0717 16:25:13.603241   95547 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0717 16:25:13.603377   95547 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0717 16:25:13.608402   95547 start.go:534] Will wait 60s for crictl version
	I0717 16:25:13.608470   95547 ssh_runner.go:195] Run: which crictl
	I0717 16:25:13.613017   95547 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0717 16:25:13.659250   95547 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1
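	
	These values come from cri-dockerd, which crictl reaches through the /etc/crictl.yaml endpoint written earlier. As an illustrative aside, the same check can be run by hand from inside the node; the flag simply overrides the config file:
	
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	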
	I0717 16:25:13.659347   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.684487   95547 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0717 16:25:13.753095   95547 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0717 16:25:13.753268   95547 cli_runner.go:164] Run: docker exec -t newest-cni-958000 dig +short host.docker.internal
	I0717 16:25:13.866872   95547 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I0717 16:25:13.867000   95547 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I0717 16:25:13.872298   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0717 16:25:13.883304   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:13.958196   95547 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0717 16:25:13.981125   95547 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 16:25:13.981274   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.004068   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.004086   95547 docker.go:566] Images already preloaded, skipping extraction
	I0717 16:25:14.004186   95547 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0717 16:25:14.024670   95547 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0717 16:25:14.024690   95547 cache_images.go:84] Images are preloaded, skipping loading
	I0717 16:25:14.024801   95547 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0717 16:25:14.077002   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:14.077018   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:14.077035   95547 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0717 16:25:14.077063   95547 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-958000 NodeName:newest-cni-958000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0717 16:25:14.077185   95547 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-958000"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
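	A rendered config like the one above is staged on the node as kubeadm.yaml.new under /var/tmp/minikube (see the scp logged a few lines below). As an illustrative aside, not a step this log performs, such a file can be sanity-checked without touching the cluster via kubeadm's dry-run mode:
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	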
	I0717 16:25:14.077264   95547 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-958000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0717 16:25:14.077334   95547 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0717 16:25:14.086761   95547 binaries.go:44] Found k8s binaries, skipping transfer
	I0717 16:25:14.086820   95547 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0717 16:25:14.095445   95547 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (415 bytes)
	I0717 16:25:14.112203   95547 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0717 16:25:14.128880   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
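	The `scp memory --> <path>` entries above stream an in-memory buffer to the node instead of staging a local file first. A rough stand-in for that pattern, piping stdin over ssh to `sudo tee` (host and path are placeholders, not minikube's ssh_runner API):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// writeRemote streams data to a root-owned file on the remote host.
	func writeRemote(host, path string, data []byte) error {
		// tee copies stdin to the target; >/dev/null keeps stdout quiet.
		cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %q >/dev/null", path))
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		unit := []byte("[Unit]\nWants=docker.socket\n")
		if err := writeRemote("docker@127.0.0.1", "/lib/systemd/system/kubelet.service", unit); err != nil {
			fmt.Println("write failed:", err)
		}
	}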
	I0717 16:25:14.146543   95547 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0717 16:25:14.151422   95547 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
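	The hosts update is the bash one-liner above: filter out any stale `control-plane.minikube.internal` line, append the fresh mapping, and copy the result back over /etc/hosts. An equivalent Go sketch, assuming a plain tab-separated hosts file:

	package main

	import (
		"os"
		"strings"
	)

	func updateHosts(path, ip, name string) error {
		raw, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path) // atomic replace within the same directory
	}

	func main() {
		_ = updateHosts("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal")
	}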
	I0717 16:25:14.162716   95547 certs.go:56] Setting up /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000 for IP: 192.168.67.2
	I0717 16:25:14.162770   95547 certs.go:190] acquiring lock for shared ca certs: {Name:mk8dc1f2afa352f9c2168154d4ab47beda1b6a37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.163001   95547 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key
	I0717 16:25:14.163059   95547 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key
	I0717 16:25:14.163153   95547 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/client.key
	I0717 16:25:14.163217   95547 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key.c7fa3a9e
	I0717 16:25:14.163302   95547 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key
	I0717 16:25:14.163503   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem (1338 bytes)
	W0717 16:25:14.163540   95547 certs.go:433] ignoring /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324_empty.pem, impossibly tiny 0 bytes
	I0717 16:25:14.163552   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca-key.pem (1679 bytes)
	I0717 16:25:14.163585   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/ca.pem (1078 bytes)
	I0717 16:25:14.163623   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/cert.pem (1123 bytes)
	I0717 16:25:14.163663   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/certs/key.pem (1675 bytes)
	I0717 16:25:14.163739   95547 certs.go:437] found cert: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem (1708 bytes)
	I0717 16:25:14.164307   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0717 16:25:14.186450   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0717 16:25:14.207957   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0717 16:25:14.230314   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/newest-cni-958000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0717 16:25:14.252824   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0717 16:25:14.275629   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0717 16:25:14.299110   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0717 16:25:14.322509   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0717 16:25:14.346431   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/ssl/certs/773242.pem --> /usr/share/ca-certificates/773242.pem (1708 bytes)
	I0717 16:25:14.370734   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0717 16:25:14.393755   95547 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16899-76867/.minikube/certs/77324.pem --> /usr/share/ca-certificates/77324.pem (1338 bytes)
	I0717 16:25:14.415972   95547 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0717 16:25:14.432875   95547 ssh_runner.go:195] Run: openssl version
	I0717 16:25:14.439416   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/773242.pem && ln -fs /usr/share/ca-certificates/773242.pem /etc/ssl/certs/773242.pem"
	I0717 16:25:14.448847   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453579   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 17 22:13 /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.453631   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/773242.pem
	I0717 16:25:14.460543   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/773242.pem /etc/ssl/certs/3ec20f2e.0"
	I0717 16:25:14.469676   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0717 16:25:14.479277   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483930   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 17 22:07 /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.483969   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0717 16:25:14.490832   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0717 16:25:14.500256   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77324.pem && ln -fs /usr/share/ca-certificates/77324.pem /etc/ssl/certs/77324.pem"
	I0717 16:25:14.509641   95547 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514059   95547 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 17 22:13 /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.514107   95547 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77324.pem
	I0717 16:25:14.521121   95547 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/77324.pem /etc/ssl/certs/51391683.0"
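	The `openssl x509 -hash` / `ln -fs` pairs above install each PEM under /etc/ssl/certs by OpenSSL subject hash, which is how the system trust store locates CA certificates (the `<hash>.0` names such as b5213941.0). A hypothetical sketch of one such installation:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installHashLink symlinks a certificate into the trust dir under
	// its OpenSSL subject-hash name.
	func installHashLink(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace a stale link if present
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installHashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println(err)
		}
	}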
	I0717 16:25:14.530596   95547 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0717 16:25:14.535035   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0717 16:25:14.542142   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0717 16:25:14.548992   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0717 16:25:14.556247   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0717 16:25:14.562969   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0717 16:25:14.569910   95547 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
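	`-checkend 86400` makes openssl exit non-zero when the certificate expires within 24 hours; that exit code is what drives the regenerate-or-keep decision for each control-plane cert checked above. A small sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithinDay reports whether the certificate will expire in
	// the next 86400 seconds (openssl exits non-zero in that case).
	func expiresWithinDay(certPath string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		fmt.Println(expiresWithinDay("/var/lib/minikube/certs/etcd/server.crt"))
	}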
	I0717 16:25:14.577016   95547 kubeadm.go:404] StartCluster: {Name:newest-cni-958000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-958000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 16:25:14.577138   95547 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:14.597277   95547 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0717 16:25:14.606836   95547 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0717 16:25:14.606848   95547 kubeadm.go:636] restartCluster start
	I0717 16:25:14.606905   95547 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0717 16:25:14.615339   95547 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:14.615416   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:14.667716   95547 kubeconfig.go:135] verify returned: extract IP: "newest-cni-958000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:14.667891   95547 kubeconfig.go:146] "newest-cni-958000" context is missing from /Users/jenkins/minikube-integration/16899-76867/kubeconfig - will repair!
	I0717 16:25:14.668222   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:14.669865   95547 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0717 16:25:14.679195   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:14.679301   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:14.689873   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.190616   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.190756   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.203103   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:15.691413   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:15.691529   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:15.703724   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.190732   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.190895   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.203130   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:16.690058   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:16.690232   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:16.702437   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.192048   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.192249   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.204452   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:17.692056   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:17.692243   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:17.704633   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.191011   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.191119   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.202854   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:18.690250   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:18.690393   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:18.702597   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.192051   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.192236   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.204550   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:19.690110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:19.690222   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:19.702215   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.192068   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.192289   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.204520   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:20.691333   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:20.691498   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:20.704042   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.192110   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.192288   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.205001   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:21.692106   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:21.692271   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:21.704783   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.191610   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.191785   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.203938   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:22.691211   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:22.691358   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:22.703668   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.191296   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.191347   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.202194   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:23.692152   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:23.692362   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:23.704743   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.190257   95547 api_server.go:166] Checking apiserver status ...
	I0717 16:25:24.190415   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0717 16:25:24.202077   95547 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
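	Each `Checking apiserver status ...` / `stopped: unable to get apiserver pid` pair above is one iteration of a roughly 500ms poll that ends when pgrep succeeds or the deadline passes (hence the `context deadline exceeded` logged next). A generic sketch of such a loop, with the pgrep check as a placeholder:

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// pollUntil retries check at a fixed interval until it succeeds or
	// the context expires.
	func pollUntil(ctx context.Context, interval time.Duration, check func() error) error {
		for {
			if err := check(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // e.g. context deadline exceeded
			case <-time.After(interval):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
		err := pollUntil(ctx, 500*time.Millisecond, func() error {
			return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		})
		fmt.Println(err)
	}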
	I0717 16:25:24.681029   95547 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0717 16:25:24.681086   95547 kubeadm.go:1128] stopping kube-system containers ...
	I0717 16:25:24.681227   95547 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0717 16:25:24.704553   95547 docker.go:462] Stopping containers: [259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c]
	I0717 16:25:24.704637   95547 ssh_runner.go:195] Run: docker stop 259b5c6baf46 0ddc1541c22f 3af88245252c ff114647a8a5 9b2697662f0c 43efb788ef54 ea864920cf13 290c651ad910 a9e6d59ad7d3 8301cd465306 f2c1f869f084 b6ee749db4d7 d742b3e72f9a 03a597dde7e8 acd6f4492172 380a5c188865 8e3b99c6893d 82e26adeaf6c
	I0717 16:25:24.724991   95547 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0717 16:25:24.736971   95547 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0717 16:25:24.745788   95547 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Jul 17 23:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 17 23:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 17 23:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 17 23:24 /etc/kubernetes/scheduler.conf
	
	I0717 16:25:24.745848   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0717 16:25:24.754637   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0717 16:25:24.763302   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.771915   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.772024   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0717 16:25:24.780929   95547 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0717 16:25:24.789904   95547 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0717 16:25:24.789972   95547 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0717 16:25:24.799813   95547 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811422   95547 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0717 16:25:24.811436   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:24.862369   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.194024   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.326421   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:25.380893   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
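	The restart path re-runs the individual kubeadm init phases against the regenerated config rather than a full `kubeadm init`. A sketch of that sequence, with paths as in the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runPhase invokes one kubeadm init phase with the versioned
	// binaries directory prepended to PATH, as in the log lines above.
	func runPhase(phase string) error {
		script := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase)
		return exec.Command("/bin/bash", "-c", script).Run()
	}

	func main() {
		for _, phase := range []string{
			"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
		} {
			if err := runPhase(phase); err != nil {
				fmt.Println(phase, "failed:", err)
				return
			}
		}
	}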
	I0717 16:25:25.472401   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:25.472527   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.038481   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.538746   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:26.553153   95547 api_server.go:72] duration metric: took 1.080737378s to wait for apiserver process to appear ...
	I0717 16:25:26.553167   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:26.553177   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:26.554906   95547 api_server.go:269] stopped: https://127.0.0.1:58401/healthz: Get "https://127.0.0.1:58401/healthz": EOF
	I0717 16:25:27.055584   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:28.991380   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:28.991400   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:28.991411   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.045024   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0717 16:25:29.045047   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0717 16:25:29.055087   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.063750   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.063768   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:29.555057   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:29.560348   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:29.560366   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.056858   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.063495   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.063517   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:30.556898   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:30.565620   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0717 16:25:30.565645   95547 api_server.go:103] status: https://127.0.0.1:58401/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0717 16:25:31.055046   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:31.132299   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:31.141094   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:31.141125   95547 api_server.go:131] duration metric: took 4.587880555s to wait for apiserver health ...
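	The healthz probe above is an anonymous HTTPS GET against the forwarded apiserver port, retried through the early 403s (RBAC bootstrap roles not yet created) and 500s (poststarthooks still failing) until /healthz returns 200 "ok". A minimal sketch of a single probe; a caller would retry it with a loop like the one sketched earlier:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// healthz performs one anonymous GET against the endpoint. TLS
	// verification is skipped here for the local port-forward; a real
	// client would pin the cluster CA instead.
	func healthz(url string) (int, string, error) {
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return 0, "", err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode, string(body), nil
	}

	func main() {
		code, body, err := healthz("https://127.0.0.1:58401/healthz")
		fmt.Println(code, body, err)
	}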
	I0717 16:25:31.141151   95547 cni.go:84] Creating CNI manager for ""
	I0717 16:25:31.141159   95547 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 16:25:31.162303   95547 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0717 16:25:31.199700   95547 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0717 16:25:31.207311   95547 cni.go:188] applying CNI manifest using /var/lib/minikube/binaries/v1.27.3/kubectl ...
	I0717 16:25:31.207323   95547 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0717 16:25:31.225572   95547 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0717 16:25:31.874913   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:31.882378   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:31.882403   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:31.882412   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:31.882419   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:31.882438   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:31.882444   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:31.882450   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:31.882455   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:31.882461   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:31.882466   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:31.882470   95547 system_pods.go:74] duration metric: took 7.544921ms to wait for pod list to return data ...
	I0717 16:25:31.882479   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:31.938022   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:31.938036   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:31.938085   95547 node_conditions.go:105] duration metric: took 55.598559ms to run NodePressure ...
	I0717 16:25:31.938111   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0717 16:25:32.257733   95547 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0717 16:25:32.267463   95547 ops.go:34] apiserver oom_adj: -16
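	The oom_adj read above confirms the apiserver carries a negative OOM score adjustment (-16), making it one of the last processes the kernel kills under memory pressure. A sketch of the same check:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Newest process whose name matches kube-apiserver.
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			fmt.Println("no apiserver process:", err)
			return
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
	}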
	I0717 16:25:32.267475   95547 kubeadm.go:640] restartCluster took 17.660339955s
	I0717 16:25:32.267487   95547 kubeadm.go:406] StartCluster complete in 17.690198187s
	I0717 16:25:32.267505   95547 settings.go:142] acquiring lock: {Name:mkcd1c9566f766bc2df0b9039d6e9d173f23ce3a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.267594   95547 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 16:25:32.268218   95547 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/kubeconfig: {Name:mk7ebdcff64e7ccd84e22cec95bc3c8ecbf54564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 16:25:32.268476   95547 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0717 16:25:32.268497   95547 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0717 16:25:32.268630   95547 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-958000"
	I0717 16:25:32.268647   95547 addons.go:69] Setting dashboard=true in profile "newest-cni-958000"
	I0717 16:25:32.268658   95547 addons.go:69] Setting default-storageclass=true in profile "newest-cni-958000"
	I0717 16:25:32.268666   95547 addons.go:231] Setting addon storage-provisioner=true in "newest-cni-958000"
	W0717 16:25:32.268674   95547 addons.go:240] addon storage-provisioner should already be in state true
	I0717 16:25:32.268674   95547 addons.go:231] Setting addon dashboard=true in "newest-cni-958000"
	I0717 16:25:32.268679   95547 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-958000"
	W0717 16:25:32.268686   95547 addons.go:240] addon dashboard should already be in state true
	I0717 16:25:32.268654   95547 addons.go:69] Setting metrics-server=true in profile "newest-cni-958000"
	I0717 16:25:32.268722   95547 addons.go:231] Setting addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:32.268731   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.268741   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	W0717 16:25:32.268734   95547 addons.go:240] addon metrics-server should already be in state true
	I0717 16:25:32.268795   95547 config.go:182] Loaded profile config "newest-cni-958000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 16:25:32.268822   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.269097   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269199   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269283   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.269349   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.281119   95547 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-958000" context rescaled to 1 replicas
	I0717 16:25:32.281190   95547 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0717 16:25:32.304896   95547 out.go:177] * Verifying Kubernetes components...
	I0717 16:25:32.345271   95547 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 16:25:32.380257   95547 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0717 16:25:32.380262   95547 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0717 16:25:32.380221   95547 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0717 16:25:32.368883   95547 addons.go:231] Setting addon default-storageclass=true in "newest-cni-958000"
	I0717 16:25:32.401301   95547 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0717 16:25:32.401371   95547 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	W0717 16:25:32.422071   95547 addons.go:240] addon default-storageclass should already be in state true
	I0717 16:25:32.422071   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0717 16:25:32.422111   95547 host.go:66] Checking if "newest-cni-958000" exists ...
	I0717 16:25:32.443317   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0717 16:25:32.443331   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0717 16:25:32.422116   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0717 16:25:32.443356   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0717 16:25:32.443397   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443405   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.443460   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.447630   95547 cli_runner.go:164] Run: docker container inspect newest-cni-958000 --format={{.State.Status}}
	I0717 16:25:32.457995   95547 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0717 16:25:32.458158   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.526486   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.526574   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.528382   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.530005   95547 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.530023   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0717 16:25:32.530137   95547 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-958000
	I0717 16:25:32.537419   95547 api_server.go:52] waiting for apiserver process to appear ...
	I0717 16:25:32.537522   95547 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 16:25:32.558771   95547 api_server.go:72] duration metric: took 277.518719ms to wait for apiserver process to appear ...
	I0717 16:25:32.558796   95547 api_server.go:88] waiting for apiserver healthz status ...
	I0717 16:25:32.558819   95547 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58401/healthz ...
	I0717 16:25:32.568638   95547 api_server.go:279] https://127.0.0.1:58401/healthz returned 200:
	ok
	I0717 16:25:32.571219   95547 api_server.go:141] control plane version: v1.27.3
	I0717 16:25:32.571237   95547 api_server.go:131] duration metric: took 12.431888ms to wait for apiserver health ...
	I0717 16:25:32.571246   95547 system_pods.go:43] waiting for kube-system pods to appear ...
	I0717 16:25:32.579738   95547 system_pods.go:59] 9 kube-system pods found
	I0717 16:25:32.579759   95547 system_pods.go:61] "coredns-5d78c9869d-78dd9" [de57e5a7-c7e0-4452-85ff-1a3b1d22f072] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0717 16:25:32.579770   95547 system_pods.go:61] "etcd-newest-cni-958000" [d26f26b2-e584-4db8-b787-9221da3ae2c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0717 16:25:32.579782   95547 system_pods.go:61] "kindnet-2qwmv" [ef5de39d-c3b1-4c33-a780-1c8b7f590356] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0717 16:25:32.579799   95547 system_pods.go:61] "kube-apiserver-newest-cni-958000" [ef52f413-7df8-4a49-890d-77d96f4b6fe1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0717 16:25:32.579807   95547 system_pods.go:61] "kube-controller-manager-newest-cni-958000" [c43be546-ca87-42c7-89f4-8b4d6bf0a065] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0717 16:25:32.579818   95547 system_pods.go:61] "kube-proxy-vgrgn" [33d990c3-b0c6-4bd3-a8c9-a97793a4d90a] Running
	I0717 16:25:32.579827   95547 system_pods.go:61] "kube-scheduler-newest-cni-958000" [fcbf3fd5-35ae-4169-9731-7efda86a550b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0717 16:25:32.579851   95547 system_pods.go:61] "metrics-server-74d5c6b9c-v6xx7" [758088d8-d032-45f2-8373-0d46b877596f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0717 16:25:32.579865   95547 system_pods.go:61] "storage-provisioner" [08822a3c-72fd-4c06-abfd-98dcb808d89c] Running
	I0717 16:25:32.579875   95547 system_pods.go:74] duration metric: took 8.621395ms to wait for pod list to return data ...
	I0717 16:25:32.579884   95547 default_sa.go:34] waiting for default service account to be created ...
	I0717 16:25:32.583269   95547 default_sa.go:45] found service account: "default"
	I0717 16:25:32.583283   95547 default_sa.go:55] duration metric: took 3.39363ms for default service account to be created ...
	I0717 16:25:32.583292   95547 kubeadm.go:581] duration metric: took 302.051979ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0717 16:25:32.583303   95547 node_conditions.go:102] verifying NodePressure condition ...
	I0717 16:25:32.598048   95547 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58402 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/newest-cni-958000/id_rsa Username:docker}
	I0717 16:25:32.641217   95547 node_conditions.go:122] node storage ephemeral capacity is 61202244Ki
	I0717 16:25:32.641232   95547 node_conditions.go:123] node cpu capacity is 6
	I0717 16:25:32.641245   95547 node_conditions.go:105] duration metric: took 57.936871ms to run NodePressure ...
	I0717 16:25:32.641255   95547 start.go:228] waiting for startup goroutines ...
	I0717 16:25:32.753584   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0717 16:25:32.754402   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0717 16:25:32.754415   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0717 16:25:32.756489   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0717 16:25:32.758325   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0717 16:25:32.758365   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0717 16:25:32.839099   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0717 16:25:32.839121   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0717 16:25:32.841403   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0717 16:25:32.841417   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0717 16:25:32.866710   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0717 16:25:32.866725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0717 16:25:32.868340   95547 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.868355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0717 16:25:32.948686   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0717 16:25:32.948700   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0717 16:25:32.949624   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0717 16:25:32.974331   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0717 16:25:32.974355   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0717 16:25:33.069706   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0717 16:25:33.069725   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0717 16:25:33.169084   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0717 16:25:33.169104   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0717 16:25:33.250984   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0717 16:25:33.251007   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0717 16:25:33.338269   95547 addons.go:423] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.338298   95547 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0717 16:25:33.367104   95547 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0717 16:25:33.894705   95547 addons.go:467] Verifying addon metrics-server=true in "newest-cni-958000"
	I0717 16:25:34.484789   95547 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.117630675s)
	I0717 16:25:34.506638   95547 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-958000 addons enable metrics-server	
	
	
	I0717 16:25:34.526957   95547 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0717 16:25:34.548482   95547 addons.go:502] enable addons completed in 2.279938211s: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0717 16:25:34.548518   95547 start.go:233] waiting for cluster config update ...
	I0717 16:25:34.548529   95547 start.go:242] writing updated cluster config ...
	I0717 16:25:34.569980   95547 ssh_runner.go:195] Run: rm -f paused
	I0717 16:25:34.613358   95547 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0717 16:25:34.634859   95547 out.go:177] * Done! kubectl is now configured to use "newest-cni-958000" cluster and "default" namespace by default
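	
	Once start returns, the cluster from this run can be spot-checked from the host. A minimal sketch using standard minikube/kubectl commands against the profile created above (these are not part of the captured output; the context name matches the profile by minikube convention):
	
		minikube -p newest-cni-958000 status
		kubectl --context newest-cni-958000 get pods -A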
	
	* 
	* ==> Docker <==
	* Jul 17 23:25:34 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:34 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:34Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:35 newest-cni-958000 dockerd[754]: time="2023-07-17T23:25:35.038565179Z" level=info msg="ignoring event" container=da079162cd3af52f15327e72540d64b13c906d324cd215b74c726883038c6bce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:25:35 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:35Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/95ed576fd291635af1bd974328dca78efdedfa42f535677c78e2875aeaa88004/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:25:35 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:35Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:25:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"metrics-server-74d5c6b9c-v6xx7_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:25:36 newest-cni-958000 dockerd[754]: time="2023-07-17T23:25:36.159975768Z" level=info msg="ignoring event" container=95ed576fd291635af1bd974328dca78efdedfa42f535677c78e2875aeaa88004 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: W0717 23:25:36.244512     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:25:36 newest-cni-958000 cri-dockerd[978]: W0717 23:25:36.659738     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.871296003Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.891228987Z" level=warning msg="Cannot unpause container 3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681: cannot resume a stopped container: unknown"
	Jul 17 23:26:00 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:00.903764341Z" level=info msg="ignoring event" container=3d53869f74c09680a1ae527a126dc3d12b6fba6768e217cd85bdea82b038d681 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:00 newest-cni-958000 cri-dockerd[978]: W0717 23:26:00.917334     978 logging.go:59] [core] [Server #1] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	Jul 17 23:26:09 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:09.267020140Z" level=info msg="ignoring event" container=2d0fd704caeb3026651e755beddb6a05f2d3a2fad2e8b9588b43745c0c25d924 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:09 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.42.0.0/24,},}"
	Jul 17 23:26:10 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:10Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-5d78c9869d-78dd9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"da079162cd3af52f15327e72540d64b13c906d324cd215b74c726883038c6bce\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"9032b1942e842d20f2ded9cad0fbb1c3adf2449dbd0384be221a4495b0f9ab3c\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"259b5c6baf46b357c1f03685b9bb76a50ec59bb380749ecd052e5dddb7ed72b9\". Proceed without further sandbox information."
	Jul 17 23:26:11 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:11.158519987Z" level=info msg="ignoring event" container=e56a04d8c11f13393d19806ca37bda316d020a73e9344f37eafb77cad81b7c76 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/45d6dca7e44a993a0acc113e844ea29383a5a51be185cae833102d5aaab45237/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/86e1c46ff1c55b227d36ce3eb60a8bdb0e4260aa64b155545c25759bdeabee70/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910479620edf8dabf7a625d485714683f2ea088239c702e2f9966eff21a479a7/resolv.conf as [nameserver 10.96.0.10 search kube-system.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 17 23:26:11 newest-cni-958000 cri-dockerd[978]: time="2023-07-17T23:26:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c7c1a6a950e6c77f46b8fad421f8b39fc702edf250db5ac334687cab0a724d0c/resolv.conf as [nameserver 192.168.65.254 options ndots:0]"
	Jul 17 23:26:11 newest-cni-958000 dockerd[754]: time="2023-07-17T23:26:11.974035565Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
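	
	The Docker section above is the daemon's journal from inside the node container; assuming the profile is still running and the kicbase image manages dockerd under systemd (the usual setup), a roughly equivalent view can be pulled by hand:
	
		minikube -p newest-cni-958000 ssh -- sudo journalctl -u docker --no-pager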
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	43a70c51d79c6       6e38f40d628db                                                                              6 seconds ago        Running             storage-provisioner       2                   791e47bc9be49       storage-provisioner
	86d37656adc59       ead0a4a53df89                                                                              6 seconds ago        Running             coredns                   2                   c7c1a6a950e6c       coredns-5d78c9869d-78dd9
	67cb39ad20197       kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974   44 seconds ago       Running             kindnet-cni               0                   83660f73fc15e       kindnet-2qwmv
	3d53869f74c09       ead0a4a53df89                                                                              47 seconds ago       Exited              coredns                   1                   e56a04d8c11f1       coredns-5d78c9869d-78dd9
	2d0fd704caeb3       6e38f40d628db                                                                              47 seconds ago       Exited              storage-provisioner       1                   791e47bc9be49       storage-provisioner
	e0b6a3ada1e11       5780543258cf0                                                                              48 seconds ago       Running             kube-proxy                1                   3903e0220a290       kube-proxy-vgrgn
	777fc3ba407a6       86b6af7dd652c                                                                              51 seconds ago       Running             etcd                      1                   1989fe329f866       etcd-newest-cni-958000
	200d42a5c64f0       41697ceeb70b3                                                                              51 seconds ago       Running             kube-scheduler            1                   7edfd46cd1675       kube-scheduler-newest-cni-958000
	8a15f393d56ff       7cffc01dba0e1                                                                              51 seconds ago       Running             kube-controller-manager   1                   dde7a0b9d7a65       kube-controller-manager-newest-cni-958000
	7c126a6bfb9fe       08a0c939e61b7                                                                              51 seconds ago       Running             kube-apiserver            1                   47fdf1ecbd4e9       kube-apiserver-newest-cni-958000
	9b2697662f0c9       ead0a4a53df89                                                                              About a minute ago   Exited              coredns                   0                   ea864920cf134       coredns-5d78c9869d-6l7gl
	290c651ad9103       5780543258cf0                                                                              About a minute ago   Exited              kube-proxy                0                   a9e6d59ad7d37       kube-proxy-vgrgn
	f2c1f869f084f       86b6af7dd652c                                                                              About a minute ago   Exited              etcd                      0                   82e26adeaf6c6       etcd-newest-cni-958000
	b6ee749db4d7a       08a0c939e61b7                                                                              About a minute ago   Exited              kube-apiserver            0                   8e3b99c6893dd       kube-apiserver-newest-cni-958000
	d742b3e72f9ac       7cffc01dba0e1                                                                              About a minute ago   Exited              kube-controller-manager   0                   380a5c188865d       kube-controller-manager-newest-cni-958000
	03a597dde7e87       41697ceeb70b3                                                                              About a minute ago   Exited              kube-scheduler            0                   acd6f44921724       kube-scheduler-newest-cni-958000
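	
	The container status table is CRI-level output; with the Docker driver it can be reproduced by hand (a sketch, assuming the node container is still up):
	
		minikube -p newest-cni-958000 ssh -- sudo crictl ps -a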
	
	* 
	* ==> coredns [3d53869f74c0] <==
	* [INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [86d37656adc5] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33190 - 18215 "HINFO IN 2316843258821924969.4346406040297016947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008529913s
	
	* 
	* ==> coredns [9b2697662f0c] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] 127.0.0.1:33625 - 1270 "HINFO IN 75311783510755995.4415874585306800306. udp 55 false 512" - - 0 5.000053693s
	[ERROR] plugin/errors: 2 75311783510755995.4415874585306800306. HINFO: dial udp 192.168.65.254:53: connect: network is unreachable
	
	* 
	* ==> describe nodes <==
	* Name:               newest-cni-958000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-958000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b76e7e219387ed29a8027b03764cb35e04d80ac8
	                    minikube.k8s.io/name=newest-cni-958000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_17T16_24_39_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 17 Jul 2023 23:24:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-958000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 17 Jul 2023 23:26:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 17 Jul 2023 23:26:09 +0000   Mon, 17 Jul 2023 23:24:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    newest-cni-958000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  61202244Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 322c480d41584a4a8b6e62cada18398e
	  System UUID:                322c480d41584a4a8b6e62cada18398e
	  Boot ID:                    39ad526a-f9da-4327-9b2d-183cb5a85afa
	  Kernel Version:             5.15.49-linuxkit-pr
	  OS Image:                   Ubuntu 22.04.2 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5d78c9869d-78dd9                      100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     87s
	  kube-system                 etcd-newest-cni-958000                        100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         100s
	  kube-system                 kindnet-2qwmv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-newest-cni-958000              250m (4%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-newest-cni-958000     200m (3%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-vgrgn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-newest-cni-958000              100m (1%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 metrics-server-74d5c6b9c-v6xx7                100m (1%)     0 (0%)      200Mi (3%)       0 (0%)         85s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-59c665bc77-z5fjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-b8qmr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (15%)   100m (1%)
	  memory             420Mi (7%)   220Mi (3%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 86s                kube-proxy       
	  Normal  Starting                 100s               kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  100s               kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s               kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s               kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           88s                node-controller  Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller
	  Normal  Starting                 53s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x7 over 53s)  kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-958000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-958000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s                 kubelet          Node newest-cni-958000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet          Updated Node Allocatable limit across pods
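	
	The node report above mirrors what kubectl reports directly; for a live cluster the same data is available with:
	
		kubectl --context newest-cni-958000 describe node newest-cni-958000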
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [777fc3ba407a] <==
	* {"level":"info","ts":"2023-07-17T23:25:26.462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:25:26.462Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:25:26.464Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.051Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-958000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T23:25:28.054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:25:28.055Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T23:25:28.055Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	WARNING: 2023/07/17 23:25:36 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"info","ts":"2023-07-17T23:26:11.598Z","caller":"traceutil/trace.go:171","msg":"trace[1659958405] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"128.203479ms","start":"2023-07-17T23:26:11.470Z","end":"2023-07-17T23:26:11.598Z","steps":["trace[1659958405] 'process raft request'  (duration: 127.968235ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:26:13.172Z","caller":"traceutil/trace.go:171","msg":"trace[389432424] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"111.388732ms","start":"2023-07-17T23:26:13.061Z","end":"2023-07-17T23:26:13.172Z","steps":["trace[389432424] 'process raft request'  (duration: 111.349537ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:26:13.173Z","caller":"traceutil/trace.go:171","msg":"trace[45499688] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"114.525682ms","start":"2023-07-17T23:26:13.058Z","end":"2023-07-17T23:26:13.173Z","steps":["trace[45499688] 'process raft request'  (duration: 112.404717ms)"],"step_count":1}
	
	* 
	* ==> etcd [f2c1f869f084] <==
	* {"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.148Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:newest-cni-958000 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-17T23:24:34.150Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-17T23:24:34.151Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2023-07-17T23:24:51.137Z","caller":"traceutil/trace.go:171","msg":"trace[602594232] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"101.718196ms","start":"2023-07-17T23:24:51.035Z","end":"2023-07-17T23:24:51.137Z","steps":["trace[602594232] 'process raft request'  (duration: 99.917385ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:24:51.137Z","caller":"traceutil/trace.go:171","msg":"trace[421865934] transaction","detail":"{read_only:false; response_revision:342; number_of_response:1; }","duration":"100.77849ms","start":"2023-07-17T23:24:51.036Z","end":"2023-07-17T23:24:51.137Z","steps":["trace[421865934] 'process raft request'  (duration: 100.32683ms)"],"step_count":1}
	{"level":"info","ts":"2023-07-17T23:24:54.565Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-17T23:24:54.565Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"newest-cni-958000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"info","ts":"2023-07-17T23:24:54.646Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2023-07-17T23:24:54.647Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:24:54.648Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2023-07-17T23:24:54.648Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"newest-cni-958000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	* 
	* ==> kernel <==
	*  23:26:18 up  7:25,  0 users,  load average: 1.78, 1.25, 1.23
	Linux newest-cni-958000 5.15.49-linuxkit-pr #1 SMP Thu May 25 07:17:40 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.2 LTS"
	
	* 
	* ==> kindnet [67cb39ad2019] <==
	* I0717 23:25:33.439688       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0717 23:25:33.439762       1 main.go:107] hostIP = 192.168.67.2
	podIP = 192.168.67.2
	I0717 23:25:33.439899       1 main.go:116] setting mtu 65535 for CNI 
	I0717 23:25:33.439938       1 main.go:146] kindnetd IP family: "ipv4"
	I0717 23:25:33.439954       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0717 23:25:34.037155       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 23:25:34.037196       1 main.go:227] handling current node
	I0717 23:26:09.246563       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0717 23:26:09.246587       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [7c126a6bfb9f] <==
	* W0717 23:25:30.136097       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:25:30.136132       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0717 23:25:30.136140       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0717 23:25:30.136160       1 handler_proxy.go:100] no RequestInfo found in the context
	E0717 23:25:30.136216       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0717 23:25:30.137209       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0717 23:25:31.868117       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0717 23:25:32.077892       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0717 23:25:32.141060       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0717 23:25:32.143582       1 handler_discovery.go:325] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.206.80:443: connect: connection refused
	I0717 23:25:32.143596       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0717 23:25:32.239473       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0717 23:25:32.247517       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0717 23:25:34.337572       1 controller.go:624] quota admission added evaluator for: namespaces
	I0717 23:25:34.466480       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.111.14.12]
	I0717 23:25:34.479074       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.105.157.73]
	E0717 23:25:36.093193       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0717 23:25:36.093240       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0717 23:25:36.093275       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 9.867µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0717 23:25:36.094570       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0717 23:25:36.095810       1 timeout.go:142] post-timeout activity - time-elapsed: 2.660057ms, PATCH "/api/v1/namespaces/kube-system/events/metrics-server-74d5c6b9c-v6xx7.1772cb4a33b59d8d" result: <nil>
	I0717 23:26:09.450927       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0717 23:26:09.543764       1 controller.go:624] quota admission added evaluator for: endpoints
	I0717 23:26:09.580601       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [b6ee749db4d7] <==
	* }. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.574232       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.574248       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W0717 23:24:55.636582       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [8a15f393d56f] <==
	* I0717 23:26:09.536387       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-59c665bc77-z5fjd"
	I0717 23:26:09.540341       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-958000\" does not exist"
	I0717 23:26:09.560083       1 shared_informer.go:318] Caches are synced for node
	I0717 23:26:09.560322       1 range_allocator.go:174] "Sending events to api server"
	I0717 23:26:09.560453       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0717 23:26:09.560463       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0717 23:26:09.560468       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0717 23:26:09.561486       1 shared_informer.go:318] Caches are synced for GC
	I0717 23:26:09.563788       1 shared_informer.go:318] Caches are synced for daemon sets
	I0717 23:26:09.568378       1 shared_informer.go:318] Caches are synced for attach detach
	I0717 23:26:09.573897       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0717 23:26:09.580566       1 shared_informer.go:318] Caches are synced for persistent volume
	I0717 23:26:09.638659       1 shared_informer.go:318] Caches are synced for TTL
	I0717 23:26:09.638716       1 shared_informer.go:318] Caches are synced for taint
	I0717 23:26:09.639022       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0717 23:26:09.639133       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="newest-cni-958000"
	I0717 23:26:09.639739       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0717 23:26:09.639827       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0717 23:26:09.639979       1 taint_manager.go:211] "Sending events to api server"
	I0717 23:26:09.639996       1 event.go:307] "Event occurred" object="newest-cni-958000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node newest-cni-958000 event: Registered Node newest-cni-958000 in Controller"
	I0717 23:26:09.658380       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:26:09.661203       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:26:09.977039       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:26:09.978440       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:26:09.978463       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [d742b3e72f9a] <==
	* I0717 23:24:50.016551       1 shared_informer.go:318] Caches are synced for expand
	I0717 23:24:50.017628       1 shared_informer.go:318] Caches are synced for deployment
	I0717 23:24:50.023667       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0717 23:24:50.044424       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0717 23:24:50.097549       1 shared_informer.go:318] Caches are synced for crt configmap
	I0717 23:24:50.103420       1 shared_informer.go:318] Caches are synced for endpoint
	I0717 23:24:50.130251       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0717 23:24:50.135141       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0717 23:24:50.172633       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:24:50.218810       1 shared_informer.go:318] Caches are synced for disruption
	I0717 23:24:50.220262       1 shared_informer.go:318] Caches are synced for resource quota
	I0717 23:24:50.329965       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vgrgn"
	I0717 23:24:50.336027       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-2qwmv"
	I0717 23:24:50.536007       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:24:50.587722       1 shared_informer.go:318] Caches are synced for garbage collector
	I0717 23:24:50.587820       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0717 23:24:50.771807       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5d78c9869d to 2"
	I0717 23:24:50.897141       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5d78c9869d to 1 from 2"
	I0717 23:24:51.024597       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-6l7gl"
	I0717 23:24:51.141891       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5d78c9869d-78dd9"
	I0717 23:24:51.176107       1 event.go:307] "Event occurred" object="kube-system/coredns-5d78c9869d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5d78c9869d-6l7gl"
	I0717 23:24:53.759697       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-74d5c6b9c to 1"
	I0717 23:24:53.765252       1 event.go:307] "Event occurred" object="kube-system/metrics-server-74d5c6b9c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-74d5c6b9c-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0717 23:24:53.770121       1 replica_set.go:544] sync "kube-system/metrics-server-74d5c6b9c" failed with pods "metrics-server-74d5c6b9c-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0717 23:24:53.774729       1 event.go:307] "Event occurred" object="kube-system/metrics-server-74d5c6b9c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-74d5c6b9c-v6xx7"
	
	* 
	* ==> kube-proxy [290c651ad910] <==
	* I0717 23:24:51.939075       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 23:24:51.939160       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 23:24:51.939185       1 server_others.go:554] "Using iptables proxy"
	I0717 23:24:51.968165       1 server_others.go:192] "Using iptables Proxier"
	I0717 23:24:51.968252       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 23:24:51.968260       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 23:24:51.968276       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 23:24:51.968298       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 23:24:51.968879       1 server.go:658] "Version info" version="v1.27.3"
	I0717 23:24:51.968946       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:24:51.971242       1 config.go:188] "Starting service config controller"
	I0717 23:24:51.972640       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 23:24:51.973381       1 config.go:315] "Starting node config controller"
	I0717 23:24:51.973415       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 23:24:51.973473       1 config.go:97] "Starting endpoint slice config controller"
	I0717 23:24:51.973499       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 23:24:52.074361       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0717 23:24:52.074404       1 shared_informer.go:318] Caches are synced for service config
	I0717 23:24:52.074490       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-proxy [e0b6a3ada1e1] <==
	* I0717 23:25:30.547806       1 node.go:141] Successfully retrieved node IP: 192.168.67.2
	I0717 23:25:30.547864       1 server_others.go:110] "Detected node IP" address="192.168.67.2"
	I0717 23:25:30.547886       1 server_others.go:554] "Using iptables proxy"
	I0717 23:25:30.650654       1 server_others.go:192] "Using iptables Proxier"
	I0717 23:25:30.650675       1 server_others.go:199] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0717 23:25:30.650684       1 server_others.go:200] "Creating dualStackProxier for iptables"
	I0717 23:25:30.650695       1 server_others.go:484] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, defaulting to no-op detect-local for IPv6"
	I0717 23:25:30.650718       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0717 23:25:30.651324       1 server.go:658] "Version info" version="v1.27.3"
	I0717 23:25:30.651332       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:25:30.656340       1 config.go:315] "Starting node config controller"
	I0717 23:25:30.656357       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0717 23:25:30.656920       1 config.go:97] "Starting endpoint slice config controller"
	I0717 23:25:30.656927       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0717 23:25:30.657096       1 config.go:188] "Starting service config controller"
	I0717 23:25:30.657110       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0717 23:25:30.757099       1 shared_informer.go:318] Caches are synced for node config
	I0717 23:25:30.757158       1 shared_informer.go:318] Caches are synced for service config
	I0717 23:25:30.757277       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [03a597dde7e8] <==
	* W0717 23:24:35.154960       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:35.155178       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:35.155333       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0717 23:24:35.155624       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0717 23:24:35.155454       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:24:35.155849       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 23:24:35.155460       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:24:35.155860       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 23:24:35.156825       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:35.156903       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:36.156418       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0717 23:24:36.156467       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0717 23:24:36.181567       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0717 23:24:36.181628       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0717 23:24:36.237032       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0717 23:24:36.237183       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0717 23:24:36.252299       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:36.252357       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0717 23:24:36.382176       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0717 23:24:36.382286       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0717 23:24:36.397445       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0717 23:24:36.397505       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0717 23:24:36.751460       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0717 23:24:54.556680       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0717 23:24:54.556779       1 run.go:74] "command failed" err="finished without leader elect"
	
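The repeated "forbidden" list/watch failures above are the usual transient RBAC errors while kube-scheduler starts before its cluster-role bindings are served; they stop once the informer caches sync (23:24:36.751460), and the final "finished without leader elect" line is the process being terminated for the restart. A minimal client-go sketch (hypothetical standalone checker, assuming a default kubeconfig and the k8s.io/client-go module) that performs the same kind of permission check via SelfSubjectAccessReview:

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load ~/.kube/config; inside a pod, rest.InClusterConfig() would be used instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Ask the API server whether the current identity may list services cluster-wide,
        // the permission the scheduler's informers were being denied above.
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "services"},
            },
        }
        res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("can list services:", res.Status.Allowed)
    }

The CLI equivalent is `kubectl auth can-i list services --as=system:kube-scheduler`.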
	* 
	* ==> kube-scheduler [200d42a5c64f] <==
	* W0717 23:25:26.745896       1 feature_gate.go:241] Setting GA feature gate ServerSideApply=true. It will be removed in a future release.
	I0717 23:25:27.251169       1 serving.go:348] Generated self-signed cert in-memory
	W0717 23:25:28.992796       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0717 23:25:28.992880       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0717 23:25:28.992889       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0717 23:25:28.992893       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0717 23:25:29.039601       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0717 23:25:29.039890       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0717 23:25:29.041829       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0717 23:25:29.042141       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0717 23:25:29.042277       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0717 23:25:29.042401       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0717 23:25:29.142796       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.842882    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpx8c\" (UniqueName: \"kubernetes.io/projected/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-kube-api-access-cpx8c\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.842932    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-cni-cfg\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843000    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-lib-modules\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843058    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr58q\" (UniqueName: \"kubernetes.io/projected/de57e5a7-c7e0-4452-85ff-1a3b1d22f072-kube-api-access-kr58q\") pod \"coredns-5d78c9869d-78dd9\" (UID: \"de57e5a7-c7e0-4452-85ff-1a3b1d22f072\") " pod="kube-system/coredns-5d78c9869d-78dd9"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843125    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxn6p\" (UniqueName: \"kubernetes.io/projected/758088d8-d032-45f2-8373-0d46b877596f-kube-api-access-kxn6p\") pod \"metrics-server-74d5c6b9c-v6xx7\" (UID: \"758088d8-d032-45f2-8373-0d46b877596f\") " pod="kube-system/metrics-server-74d5c6b9c-v6xx7"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843170    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-xtables-lock\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843200    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2tl6\" (UniqueName: \"kubernetes.io/projected/807d16b2-39e3-48df-8d35-1bc6defd1534-kube-api-access-l2tl6\") pod \"dashboard-metrics-scraper-59c665bc77-z5fjd\" (UID: \"807d16b2-39e3-48df-8d35-1bc6defd1534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77-z5fjd"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843222    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-kube-proxy\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843258    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef5de39d-c3b1-4c33-a780-1c8b7f590356-xtables-lock\") pod \"kindnet-2qwmv\" (UID: \"ef5de39d-c3b1-4c33-a780-1c8b7f590356\") " pod="kube-system/kindnet-2qwmv"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843315    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/807d16b2-39e3-48df-8d35-1bc6defd1534-tmp-volume\") pod \"dashboard-metrics-scraper-59c665bc77-z5fjd\" (UID: \"807d16b2-39e3-48df-8d35-1bc6defd1534\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-59c665bc77-z5fjd"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843426    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9svj\" (UniqueName: \"kubernetes.io/projected/5ae875a2-6788-4760-929d-65fc7520e407-kube-api-access-j9svj\") pod \"kubernetes-dashboard-5c5cfc8747-b8qmr\" (UID: \"5ae875a2-6788-4760-929d-65fc7520e407\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-b8qmr"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843487    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33d990c3-b0c6-4bd3-a8c9-a97793a4d90a-lib-modules\") pod \"kube-proxy-vgrgn\" (UID: \"33d990c3-b0c6-4bd3-a8c9-a97793a4d90a\") " pod="kube-system/kube-proxy-vgrgn"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843599    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/de57e5a7-c7e0-4452-85ff-1a3b1d22f072-config-volume\") pod \"coredns-5d78c9869d-78dd9\" (UID: \"de57e5a7-c7e0-4452-85ff-1a3b1d22f072\") " pod="kube-system/coredns-5d78c9869d-78dd9"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843625    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08822a3c-72fd-4c06-abfd-98dcb808d89c-tmp\") pod \"storage-provisioner\" (UID: \"08822a3c-72fd-4c06-abfd-98dcb808d89c\") " pod="kube-system/storage-provisioner"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843644    3686 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/758088d8-d032-45f2-8373-0d46b877596f-tmp-dir\") pod \"metrics-server-74d5c6b9c-v6xx7\" (UID: \"758088d8-d032-45f2-8373-0d46b877596f\") " pod="kube-system/metrics-server-74d5c6b9c-v6xx7"
	Jul 17 23:26:10 newest-cni-958000 kubelet[3686]: I0717 23:26:10.843661    3686 reconciler.go:41] "Reconciler: start to sync state"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.062038    3686 scope.go:115] "RemoveContainer" containerID="2d0fd704caeb3026651e755beddb6a05f2d3a2fad2e8b9588b43745c0c25d924"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.762675    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="910479620edf8dabf7a625d485714683f2ea088239c702e2f9966eff21a479a7"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.958406    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86e1c46ff1c55b227d36ce3eb60a8bdb0e4260aa64b155545c25759bdeabee70"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: I0717 23:26:11.968273    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d6dca7e44a993a0acc113e844ea29383a5a51be185cae833102d5aaab45237"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.977251    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-newest-cni-958000\" already exists" pod="kube-system/kube-scheduler-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.977974    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-newest-cni-958000\" already exists" pod="kube-system/kube-controller-manager-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.978318    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-newest-cni-958000\" already exists" pod="kube-system/kube-apiserver-newest-cni-958000"
	Jul 17 23:26:11 newest-cni-958000 kubelet[3686]: E0717 23:26:11.978379    3686 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"etcd-newest-cni-958000\" already exists" pod="kube-system/etcd-newest-cni-958000"
	Jul 17 23:26:12 newest-cni-958000 kubelet[3686]: I0717 23:26:12.988796    3686 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e56a04d8c11f13393d19806ca37bda316d020a73e9344f37eafb77cad81b7c76"
	
	* 
	* ==> storage-provisioner [2d0fd704caeb] <==
	* I0717 23:25:30.549065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0717 23:26:09.188264       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	
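The fatal line above is this provisioner instance's version probe timing out against the cluster's apiserver service VIP while the control plane was restarting. A minimal reachability probe under the same assumptions (10.96.0.1 is the default service CIDR's first address; certificate verification is skipped because this hypothetical standalone client has no cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Same endpoint as the failed request in the log above; we only care
        // about reachability here, not certificate validity.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.96.0.1:443/version")
        if err != nil {
            fmt.Println("unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, string(body))
    }

The replacement instance in the next block reaches the apiserver without trouble.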
	* 
	* ==> storage-provisioner [43a70c51d79c] <==
	* I0717 23:26:11.341536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0717 23:26:11.358632       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0717 23:26:11.358688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
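The last line above is client-go leader election acquiring the kube-system/k8s.io-minikube-hostpath lease so only one provisioner instance serves at a time. A sketch of the same pattern with current client-go (hypothetical; this provisioner build predates Lease locks, so this is the modern equivalent of the technique, not its actual code):

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname() // unique identity per contender
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: id},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second, // how long other contenders wait out a silent leader
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; serving") },
                OnStoppedLeading: func() { log.Println("lost lease; stopping") },
            },
        })
    }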

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-958000 -n newest-cni-958000
helpers_test.go:261: (dbg) Run:  kubectl --context newest-cni-958000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr: exit status 1 (100.771359ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5c6b9c-v6xx7" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-59c665bc77-z5fjd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-5c5cfc8747-b8qmr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context newest-cni-958000 describe pod metrics-server-74d5c6b9c-v6xx7 dashboard-metrics-scraper-59c665bc77-z5fjd kubernetes-dashboard-5c5cfc8747-b8qmr: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (45.80s)
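For reference, the post-mortem's pod filter (`--field-selector=status.phase!=Running`) is a server-side field selector; a minimal client-go sketch (hypothetical, assuming a default kubeconfig) issuing the same query:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Same filter the test harness passes to kubectl: pods not in phase Running.
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
            metav1.ListOptions{FieldSelector: "status.phase!=Running"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Println(p.Namespace + "/" + p.Name)
        }
    }

The describe step then reports NotFound, suggesting the non-running pods were removed between the list and the describe; the harness records the exit status 1 and moves on.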

                                                
                                    

Test pass (281/317)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 10.99
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.28
10 TestDownloadOnly/v1.27.3/json-events 11.74
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.28
16 TestDownloadOnly/DeleteAll 0.62
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
18 TestDownloadOnlyKic 1.93
19 TestBinaryMirror 1.63
20 TestOffline 58
22 TestAddons/Setup 210.73
26 TestAddons/parallel/InspektorGadget 11.2
27 TestAddons/parallel/MetricsServer 5.8
28 TestAddons/parallel/HelmTiller 11.18
30 TestAddons/parallel/CSI 67.23
31 TestAddons/parallel/Headlamp 14.81
32 TestAddons/parallel/CloudSpanner 5.69
35 TestAddons/serial/GCPAuth/Namespaces 0.16
36 TestAddons/StoppedEnableDisable 11.65
37 TestCertOptions 26.43
38 TestCertExpiration 233.01
39 TestDockerFlags 27.29
40 TestForceSystemdFlag 27.63
41 TestForceSystemdEnv 27.77
44 TestHyperKitDriverInstallOrUpdate 7.44
47 TestErrorSpam/setup 22.38
48 TestErrorSpam/start 1.9
49 TestErrorSpam/status 1.18
50 TestErrorSpam/pause 1.74
51 TestErrorSpam/unpause 1.73
52 TestErrorSpam/stop 2.75
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 49.58
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 37.36
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.99
64 TestFunctional/serial/CacheCmd/cache/add_local 1.51
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
66 TestFunctional/serial/CacheCmd/cache/list 0.07
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
68 TestFunctional/serial/CacheCmd/cache/cache_reload 2.65
69 TestFunctional/serial/CacheCmd/cache/delete 0.13
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.74
72 TestFunctional/serial/ExtraConfig 39.97
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 3.34
75 TestFunctional/serial/LogsFileCmd 3.25
76 TestFunctional/serial/InvalidService 4.06
78 TestFunctional/parallel/ConfigCmd 0.44
79 TestFunctional/parallel/DashboardCmd 16.14
80 TestFunctional/parallel/DryRun 1.57
81 TestFunctional/parallel/InternationalLanguage 0.62
82 TestFunctional/parallel/StatusCmd 1.19
87 TestFunctional/parallel/AddonsCmd 0.23
88 TestFunctional/parallel/PersistentVolumeClaim 29.38
90 TestFunctional/parallel/SSHCmd 0.73
91 TestFunctional/parallel/CpCmd 1.64
92 TestFunctional/parallel/MySQL 41.62
93 TestFunctional/parallel/FileSync 0.39
94 TestFunctional/parallel/CertSync 2.44
98 TestFunctional/parallel/NodeLabels 0.05
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.37
103 TestFunctional/parallel/Version/short 0.09
104 TestFunctional/parallel/Version/components 1.1
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.37
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.39
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
109 TestFunctional/parallel/ImageCommands/ImageBuild 3.68
110 TestFunctional/parallel/ImageCommands/Setup 3.15
111 TestFunctional/parallel/DockerEnv/bash 2.35
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.27
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.27
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.27
115 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.58
116 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.99
117 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.1
118 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.94
119 TestFunctional/parallel/ImageCommands/ImageRemove 0.86
120 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.8
121 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.05
122 TestFunctional/parallel/ServiceCmd/DeployApp 17.15
123 TestFunctional/parallel/ServiceCmd/List 0.42
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.44
125 TestFunctional/parallel/ServiceCmd/HTTPS 15
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.22
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ServiceCmd/Format 15
138 TestFunctional/parallel/ServiceCmd/URL 15
139 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
140 TestFunctional/parallel/ProfileCmd/profile_list 0.46
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.46
142 TestFunctional/parallel/MountCmd/any-port 8.95
143 TestFunctional/parallel/MountCmd/specific-port 2.29
144 TestFunctional/parallel/MountCmd/VerifyCleanup 2.82
145 TestFunctional/delete_addon-resizer_images 0.2
146 TestFunctional/delete_my-image_image 0.05
147 TestFunctional/delete_minikube_cached_images 0.05
151 TestImageBuild/serial/Setup 23.28
152 TestImageBuild/serial/NormalBuild 2.67
153 TestImageBuild/serial/BuildWithBuildArg 0.86
154 TestImageBuild/serial/BuildWithDockerIgnore 0.69
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.7
165 TestJSONOutput/start/Command 50.23
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.6
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.61
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 10.93
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.71
190 TestKicCustomNetwork/create_custom_network 25
191 TestKicCustomNetwork/use_default_bridge_network 24.37
192 TestKicExistingNetwork 24.36
193 TestKicCustomSubnet 24.8
194 TestKicStaticIP 24.17
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 52.53
199 TestMountStart/serial/StartWithMountFirst 7.97
200 TestMountStart/serial/VerifyMountFirst 0.37
201 TestMountStart/serial/StartWithMountSecond 8.27
202 TestMountStart/serial/VerifyMountSecond 0.36
203 TestMountStart/serial/DeleteFirst 2.08
204 TestMountStart/serial/VerifyMountPostDelete 0.37
205 TestMountStart/serial/Stop 1.55
206 TestMountStart/serial/RestartStopped 9.15
207 TestMountStart/serial/VerifyMountPostStop 0.36
210 TestMultiNode/serial/FreshStart2Nodes 64.6
211 TestMultiNode/serial/DeployApp2Nodes 52.08
212 TestMultiNode/serial/PingHostFrom2Pods 0.86
213 TestMultiNode/serial/AddNode 16.31
214 TestMultiNode/serial/ProfileList 0.4
215 TestMultiNode/serial/CopyFile 13.69
216 TestMultiNode/serial/StopNode 2.92
217 TestMultiNode/serial/StartAfterStop 13.53
218 TestMultiNode/serial/RestartKeepsNodes 116.96
219 TestMultiNode/serial/DeleteNode 5.88
220 TestMultiNode/serial/StopMultiNode 21.81
221 TestMultiNode/serial/RestartMultiNode 60.67
222 TestMultiNode/serial/ValidateNameConflict 26.41
226 TestPreload 169.57
228 TestScheduledStopUnix 95.84
229 TestSkaffold 120.44
231 TestInsufficientStorage 10.81
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 10.96
249 TestStoppedBinaryUpgrade/Setup 1.54
251 TestStoppedBinaryUpgrade/MinikubeLogs 3.47
253 TestPause/serial/Start 50.61
254 TestPause/serial/SecondStartNoReconfiguration 41.45
255 TestPause/serial/Pause 0.7
256 TestPause/serial/VerifyStatus 0.42
257 TestPause/serial/Unpause 0.68
258 TestPause/serial/PauseAgain 0.81
259 TestPause/serial/DeletePaused 2.5
260 TestPause/serial/VerifyDeletedResources 0.52
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.42
270 TestNoKubernetes/serial/StartWithK8s 24.37
271 TestNoKubernetes/serial/StartWithStopK8s 9.74
272 TestNoKubernetes/serial/Start 8.57
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
274 TestNoKubernetes/serial/ProfileList 34.34
275 TestNoKubernetes/serial/Stop 1.53
276 TestNoKubernetes/serial/StartNoArgs 8.01
277 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
278 TestNetworkPlugins/group/auto/Start 51.13
279 TestNetworkPlugins/group/auto/KubeletFlags 0.38
280 TestNetworkPlugins/group/auto/NetCatPod 12.28
281 TestNetworkPlugins/group/auto/DNS 0.13
282 TestNetworkPlugins/group/auto/Localhost 0.12
283 TestNetworkPlugins/group/auto/HairPin 0.12
284 TestNetworkPlugins/group/flannel/Start 51.87
285 TestNetworkPlugins/group/flannel/ControllerPod 5.02
286 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
287 TestNetworkPlugins/group/flannel/NetCatPod 12.27
288 TestNetworkPlugins/group/flannel/DNS 0.13
289 TestNetworkPlugins/group/flannel/Localhost 0.11
290 TestNetworkPlugins/group/flannel/HairPin 0.11
291 TestNetworkPlugins/group/enable-default-cni/Start 38.83
292 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
293 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.34
294 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
297 TestNetworkPlugins/group/kindnet/Start 51.88
298 TestNetworkPlugins/group/bridge/Start 37.64
299 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
300 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
301 TestNetworkPlugins/group/bridge/NetCatPod 11.29
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.39
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.28
304 TestNetworkPlugins/group/bridge/DNS 0.15
305 TestNetworkPlugins/group/bridge/Localhost 0.12
306 TestNetworkPlugins/group/bridge/HairPin 0.12
307 TestNetworkPlugins/group/kindnet/DNS 0.13
308 TestNetworkPlugins/group/kindnet/Localhost 0.12
309 TestNetworkPlugins/group/kindnet/HairPin 0.12
310 TestNetworkPlugins/group/kubenet/Start 49.33
311 TestNetworkPlugins/group/custom-flannel/Start 51.27
312 TestNetworkPlugins/group/kubenet/KubeletFlags 0.41
313 TestNetworkPlugins/group/kubenet/NetCatPod 12.31
314 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
315 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
316 TestNetworkPlugins/group/kubenet/DNS 0.16
317 TestNetworkPlugins/group/kubenet/Localhost 0.11
318 TestNetworkPlugins/group/kubenet/HairPin 0.11
319 TestNetworkPlugins/group/custom-flannel/DNS 0.14
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
322 TestNetworkPlugins/group/calico/Start 66.51
323 TestNetworkPlugins/group/false/Start 38.35
324 TestNetworkPlugins/group/false/KubeletFlags 0.41
325 TestNetworkPlugins/group/false/NetCatPod 11.3
326 TestNetworkPlugins/group/false/DNS 0.17
327 TestNetworkPlugins/group/false/Localhost 0.14
328 TestNetworkPlugins/group/false/HairPin 0.13
329 TestNetworkPlugins/group/calico/ControllerPod 5.02
330 TestNetworkPlugins/group/calico/KubeletFlags 0.43
331 TestNetworkPlugins/group/calico/NetCatPod 13.33
334 TestNetworkPlugins/group/calico/DNS 0.4
335 TestNetworkPlugins/group/calico/Localhost 0.13
336 TestNetworkPlugins/group/calico/HairPin 0.13
338 TestStartStop/group/no-preload/serial/FirstStart 68.57
339 TestStartStop/group/no-preload/serial/DeployApp 10.34
340 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
341 TestStartStop/group/no-preload/serial/Stop 10.9
342 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.4
343 TestStartStop/group/no-preload/serial/SecondStart 333.75
346 TestStartStop/group/old-k8s-version/serial/Stop 1.54
347 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
349 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 22.01
350 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
351 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.42
352 TestStartStop/group/no-preload/serial/Pause 3.17
354 TestStartStop/group/embed-certs/serial/FirstStart 51.86
355 TestStartStop/group/embed-certs/serial/DeployApp 9.34
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.24
357 TestStartStop/group/embed-certs/serial/Stop 10.92
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.42
359 TestStartStop/group/embed-certs/serial/SecondStart 334.4
361 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 21.02
362 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.44
364 TestStartStop/group/embed-certs/serial/Pause 3.16
366 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.78
367 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
368 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.21
369 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.85
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.42
371 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 334.58
372 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 24.01
373 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
374 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.42
375 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.15
377 TestStartStop/group/newest-cni/serial/FirstStart 34.43
379 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
381 TestStartStop/group/newest-cni/serial/Stop 11.49
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.41
383 TestStartStop/group/newest-cni/serial/SecondStart 29.15
384 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
385 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
386 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.43
TestDownloadOnly/v1.16.0/json-events (10.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-779000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-779000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (10.992337586s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (10.99s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-779000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-779000: exit status 85 (281.188422ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-779000 | jenkins | v1.31.0 | 17 Jul 23 15:06 PDT |          |
	|         | -p download-only-779000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 15:06:34
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 15:06:34.970682   77326 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:06:34.970938   77326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:06:34.970945   77326 out.go:309] Setting ErrFile to fd 2...
	I0717 15:06:34.970949   77326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:06:34.971124   77326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	W0717 15:06:34.971297   77326 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16899-76867/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16899-76867/.minikube/config/config.json: no such file or directory
	I0717 15:06:34.973008   77326 out.go:303] Setting JSON to true
	I0717 15:06:34.993048   77326 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":21962,"bootTime":1689609632,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:06:34.993142   77326 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:06:35.012916   77326 out.go:97] [download-only-779000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:06:35.013055   77326 notify.go:220] Checking for updates...
	I0717 15:06:35.035008   77326 out.go:169] MINIKUBE_LOCATION=16899
	W0717 15:06:35.013191   77326 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball: no such file or directory
	I0717 15:06:35.079070   77326 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:06:35.101108   77326 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:06:35.122080   77326 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:06:35.142990   77326 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	W0717 15:06:35.186124   77326 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 15:06:35.186566   77326 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:06:35.242520   77326 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:06:35.242633   77326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:06:35.344338   77326 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:06:35.33237143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:06:35.366173   77326 out.go:97] Using the docker driver based on user configuration
	I0717 15:06:35.366220   77326 start.go:298] selected driver: docker
	I0717 15:06:35.366232   77326 start.go:880] validating driver "docker" against <nil>
	I0717 15:06:35.366456   77326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:06:35.471925   77326 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:06:35.460117465 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:06:35.472123   77326 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0717 15:06:35.474740   77326 start_flags.go:382] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0717 15:06:35.474900   77326 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0717 15:06:35.496292   77326 out.go:169] Using Docker Desktop driver with root privileges
	I0717 15:06:35.518503   77326 cni.go:84] Creating CNI manager for ""
	I0717 15:06:35.518543   77326 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0717 15:06:35.518565   77326 start_flags.go:319] config:
	{Name:download-only-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:06:35.540202   77326 out.go:97] Starting control plane node download-only-779000 in cluster download-only-779000
	I0717 15:06:35.540252   77326 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 15:06:35.568662   77326 out.go:97] Pulling base image ...
	I0717 15:06:35.568721   77326 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:06:35.568815   77326 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 15:06:35.621189   77326 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 15:06:35.621420   77326 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 15:06:35.621572   77326 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 15:06:35.673013   77326 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 15:06:35.673048   77326 cache.go:57] Caching tarball of preloaded images
	I0717 15:06:35.673372   77326 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:06:35.693824   77326 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0717 15:06:35.693878   77326 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:35.911435   77326 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0717 15:06:42.879305   77326 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:42.879459   77326 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:43.489909   77326 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0717 15:06:43.490127   77326 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/download-only-779000/config.json ...
	I0717 15:06:43.490157   77326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/download-only-779000/config.json: {Name:mk578b9c1b7e8e8643111955c61da7755d91a544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0717 15:06:43.492294   77326 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0717 15:06:43.492756   77326 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-779000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.28s)
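The Last Start log above shows the preload tarball being fetched with an inline checksum query parameter (checksum=md5:326f3ce331abb64565b50b8c9e791244) and then verified on disk before being cached. A minimal sketch of that verification step (hypothetical helper, not minikube's actual preload.go):

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verifyMD5 hashes the file at path and compares it to the expected hex digest.
    func verifyMD5(path, want string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        h := md5.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != want {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
        }
        return nil
    }

    func main() {
        // Values taken from the download line in the log above.
        err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
            "326f3ce331abb64565b50b8c9e791244")
        fmt.Println(err)
    }

The kubectl binary fetched immediately afterwards uses a sha1 sidecar file instead (checksum=file:...kubectl.sha1) rather than an inline digest.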

                                                
                                    
TestDownloadOnly/v1.27.3/json-events (11.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-779000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-779000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=docker : (11.734782051s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (11.74s)

                                                
                                    
TestDownloadOnly/v1.27.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/LogsDuration (0.28s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-779000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-779000: exit status 85 (280.370129ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-779000 | jenkins | v1.31.0 | 17 Jul 23 15:06 PDT |          |
	|         | -p download-only-779000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-779000 | jenkins | v1.31.0 | 17 Jul 23 15:06 PDT |          |
	|         | -p download-only-779000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/17 15:06:46
	Running on machine: MacOS-Agent-1
	Binary: Built with gc go1.20.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0717 15:06:46.249148   77357 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:06:46.249346   77357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:06:46.249351   77357 out.go:309] Setting ErrFile to fd 2...
	I0717 15:06:46.249355   77357 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:06:46.249537   77357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	W0717 15:06:46.249638   77357 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/16899-76867/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16899-76867/.minikube/config/config.json: no such file or directory
	I0717 15:06:46.250882   77357 out.go:303] Setting JSON to true
	I0717 15:06:46.272920   77357 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":21974,"bootTime":1689609632,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:06:46.273007   77357 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:06:46.294790   77357 out.go:97] [download-only-779000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:06:46.294974   77357 notify.go:220] Checking for updates...
	I0717 15:06:46.316746   77357 out.go:169] MINIKUBE_LOCATION=16899
	I0717 15:06:46.337730   77357 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:06:46.359052   77357 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:06:46.381053   77357 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:06:46.402744   77357 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	W0717 15:06:46.445770   77357 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0717 15:06:46.446467   77357 config.go:182] Loaded profile config "download-only-779000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0717 15:06:46.446549   77357 start.go:788] api.Load failed for download-only-779000: filestore "download-only-779000": Docker machine "download-only-779000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 15:06:46.446711   77357 driver.go:373] Setting default libvirt URI to qemu:///system
	W0717 15:06:46.446748   77357 start.go:788] api.Load failed for download-only-779000: filestore "download-only-779000": Docker machine "download-only-779000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0717 15:06:46.501752   77357 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:06:46.501885   77357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:06:46.602433   77357 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:06:46.590761441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:06:46.623498   77357 out.go:97] Using the docker driver based on existing profile
	I0717 15:06:46.623538   77357 start.go:298] selected driver: docker
	I0717 15:06:46.623549   77357 start.go:880] validating driver "docker" against &{Name:download-only-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:06:46.623852   77357 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:06:46.727091   77357 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:false NGoroutines:63 SystemTime:2023-07-17 22:06:46.715085843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:06:46.729976   77357 cni.go:84] Creating CNI manager for ""
	I0717 15:06:46.729996   77357 cni.go:149] "docker" driver + "docker" runtime found, recommending kindnet
	I0717 15:06:46.730008   77357 start_flags.go:319] config:
	{Name:download-only-779000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-779000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:06:46.751393   77357 out.go:97] Starting control plane node download-only-779000 in cluster download-only-779000
	I0717 15:06:46.751504   77357 cache.go:122] Beginning downloading kic base image for docker with docker
	I0717 15:06:46.772506   77357 out.go:97] Pulling base image ...
	I0717 15:06:46.772624   77357 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 15:06:46.772647   77357 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local docker daemon
	I0717 15:06:46.822927   77357 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 to local cache
	I0717 15:06:46.823155   77357 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory
	I0717 15:06:46.823187   77357 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 in local cache directory, skipping pull
	I0717 15:06:46.823191   77357 image.go:105] gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 exists in cache, skipping pull
	I0717 15:06:46.823199   77357 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 as a tarball
	I0717 15:06:46.862631   77357 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 15:06:46.862660   77357 cache.go:57] Caching tarball of preloaded images
	I0717 15:06:46.862991   77357 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 15:06:46.884342   77357 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0717 15:06:46.884436   77357 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:47.089099   77357 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4?checksum=md5:90b30902fa911e3bcfdde5b24cedf0b2 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4
	I0717 15:06:54.934295   77357 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:54.934488   77357 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4 ...
	I0717 15:06:55.545701   77357 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0717 15:06:55.545825   77357 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/download-only-779000/config.json ...
	I0717 15:06:55.546213   77357 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0717 15:06:55.546516   77357 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16899-76867/.minikube/cache/darwin/amd64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-779000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.28s)
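The download.go:107 entries in the log above request artifacts as URL?checksum=md5:<hex> and save them under the preload cache. A stdlib-only Go sketch of that download-then-verify pattern follows; it is not minikube's actual downloader, but the URL and MD5 are the ones shown in the log.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
)

// downloadVerified fetches url into dest and checks the body's MD5 against wantMD5.
func downloadVerified(url, dest, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("unexpected HTTP status %d", resp.StatusCode)
	}
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := md5.New()
	// Write to disk and hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	// URL and checksum taken from the download.go:107 line above.
	err := downloadVerified(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4",
		"preloaded-images-k8s-v18-v1.27.3-docker-overlay2-amd64.tar.lz4",
		"90b30902fa911e3bcfdde5b24cedf0b2",
	)
	if err != nil {
		log.Fatal(err)
	}
}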

TestDownloadOnly/DeleteAll (0.62s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.62s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-779000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestDownloadOnlyKic (1.93s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-541000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-541000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-541000
--- PASS: TestDownloadOnlyKic (1.93s)

TestBinaryMirror (1.63s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-221000 --alsologtostderr --binary-mirror http://127.0.0.1:52839 --driver=docker 
aaa_download_only_test.go:304: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-221000 --alsologtostderr --binary-mirror http://127.0.0.1:52839 --driver=docker : (1.021525536s)
helpers_test.go:175: Cleaning up "binary-mirror-221000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-221000
--- PASS: TestBinaryMirror (1.63s)

TestOffline (58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-111000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-111000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (55.321703306s)
helpers_test.go:175: Cleaning up "offline-docker-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-111000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-111000: (2.681889721s)
--- PASS: TestOffline (58.00s)

TestAddons/Setup (210.73s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-230000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-230000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m30.727223567s)
--- PASS: TestAddons/Setup (210.73s)

TestAddons/parallel/InspektorGadget (11.2s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2jrv4" [cbeddf53-a265-418e-9d5f-4a03c9f76c9c] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.010431764s
addons_test.go:817: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-230000
addons_test.go:817: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-230000: (6.184867118s)
--- PASS: TestAddons/parallel/InspektorGadget (11.20s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 2.947529ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-gzctg" [8b73b753-96a6-4b81-82de-f811385537b6] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009938412s
addons_test.go:391: (dbg) Run:  kubectl --context addons-230000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p addons-230000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/HelmTiller (11.18s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 3.656108ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6847666dc-jfgc6" [9d11e4ac-be10-4a54-a224-711ec7a9c02a] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.009289614s
addons_test.go:449: (dbg) Run:  kubectl --context addons-230000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-230000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.422809272s)
addons_test.go:466: (dbg) Run:  out/minikube-darwin-amd64 -p addons-230000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.18s)

TestAddons/parallel/CSI (67.23s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 5.122688ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-230000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-230000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [bf529429-4b73-4bdd-8031-01d537e67977] Pending
helpers_test.go:344: "task-pv-pod" [bf529429-4b73-4bdd-8031-01d537e67977] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [bf529429-4b73-4bdd-8031-01d537e67977] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.009557102s
addons_test.go:560: (dbg) Run:  kubectl --context addons-230000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-230000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-230000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-230000 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-230000 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-230000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-230000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-230000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [73573ad6-372f-4cff-9236-1caf9e6795c0] Pending
helpers_test.go:344: "task-pv-pod-restore" [73573ad6-372f-4cff-9236-1caf9e6795c0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [73573ad6-372f-4cff-9236-1caf9e6795c0] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.009339442s
addons_test.go:602: (dbg) Run:  kubectl --context addons-230000 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-230000 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-230000 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-darwin-amd64 -p addons-230000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-darwin-amd64 -p addons-230000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.975195563s)
addons_test.go:618: (dbg) Run:  out/minikube-darwin-amd64 -p addons-230000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.23s)
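The repeated helpers_test.go:394 lines above are a poll loop: the helper reruns kubectl get pvc <name> -o jsonpath={.status.phase} until the claim reports Bound or the 6m0s deadline passes. A minimal Go sketch of that loop, shelling out to kubectl the same way (the function name and the 2-second interval are assumptions; the context and claim names come from the test):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCBound polls the claim's .status.phase until it is Bound or timeout elapses.
func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", namespace,
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pvc %q not Bound within %s", name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-230000", "hpvc", "default", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}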

TestAddons/parallel/Headlamp (14.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-230000 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-230000 --alsologtostderr -v=1: (1.801741259s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-mv7lt" [d8490a45-8653-4d25-8c08-774c13be0393] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-mv7lt" [d8490a45-8653-4d25-8c08-774c13be0393] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.009800583s
--- PASS: TestAddons/parallel/Headlamp (14.81s)

TestAddons/parallel/CloudSpanner (5.69s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-88647b4cb-zhxnw" [0cdafc09-e0ad-4bb3-9c92-dd4f4aea740f] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.01173377s
addons_test.go:836: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-230000
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-230000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-230000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/StoppedEnableDisable (11.65s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-230000
addons_test.go:148: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-230000: (10.969468352s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-230000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-230000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-230000
--- PASS: TestAddons/StoppedEnableDisable (11.65s)

TestCertOptions (26.43s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-580000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0717 15:45:33.890451   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-580000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (23.116367577s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-580000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-580000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-580000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-580000
E0717 15:45:45.761619   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-580000: (2.508233991s)
--- PASS: TestCertOptions (26.43s)
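cert_options_test.go:60 above dumps /var/lib/minikube/certs/apiserver.crt with openssl so the test can confirm that the requested --apiserver-ips and --apiserver-names ended up as subject alternative names. A Go sketch of the same check using crypto/x509, assuming the certificate has first been copied out of the node to a local file named apiserver.crt:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	pemBytes, err := os.ReadFile("apiserver.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:  ", cert.IPAddresses)
	// One of the --apiserver-ips values passed by the test above.
	want := net.ParseIP("192.168.15.15")
	for _, ip := range cert.IPAddresses {
		if ip.Equal(want) {
			fmt.Println("found requested apiserver IP in SANs")
		}
	}
}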

TestCertExpiration (233.01s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-996000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-996000 --memory=2048 --cert-expiration=3m --driver=docker : (24.082062108s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-996000 --memory=2048 --cert-expiration=8760h --driver=docker 
E0717 15:48:46.159833   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-996000 --memory=2048 --cert-expiration=8760h --driver=docker : (26.364806095s)
helpers_test.go:175: Cleaning up "cert-expiration-996000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-996000
E0717 15:49:06.641444   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-996000: (2.559187223s)
--- PASS: TestCertExpiration (233.01s)

TestDockerFlags (27.29s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-595000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-595000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (23.214459386s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-595000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-595000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-595000: (3.029632131s)
--- PASS: TestDockerFlags (27.29s)

TestForceSystemdFlag (27.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-888000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-888000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (24.611902784s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-888000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-888000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-888000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-888000: (2.585904786s)
--- PASS: TestForceSystemdFlag (27.63s)
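docker_test.go:110 above verifies the cgroup driver by running docker info --format {{.CgroupDriver}} inside the minikube node. A Go sketch of the same assertion, run against the host's Docker daemon instead of over minikube ssh:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go-template query the test issues inside the node.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	driver := strings.TrimSpace(string(out))
	if driver != "systemd" {
		log.Fatalf("expected systemd cgroup driver, got %q", driver)
	}
	fmt.Println("cgroup driver:", driver)
}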

TestForceSystemdEnv (27.77s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-647000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-647000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (24.89902273s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-647000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-647000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-647000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-647000: (2.404908732s)
--- PASS: TestForceSystemdEnv (27.77s)

TestHyperKitDriverInstallOrUpdate (7.44s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.44s)

TestErrorSpam/setup (22.38s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-420000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-420000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 --driver=docker : (22.376428838s)
--- PASS: TestErrorSpam/setup (22.38s)

TestErrorSpam/start (1.9s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 start --dry-run
--- PASS: TestErrorSpam/start (1.90s)

TestErrorSpam/status (1.18s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 status
--- PASS: TestErrorSpam/status (1.18s)

TestErrorSpam/pause (1.74s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 pause
--- PASS: TestErrorSpam/pause (1.74s)

TestErrorSpam/unpause (1.73s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

TestErrorSpam/stop (2.75s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 stop: (2.147960404s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-420000 --log_dir /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/nospam-420000 stop
--- PASS: TestErrorSpam/stop (2.75s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/16899-76867/.minikube/files/etc/test/nested/copy/77324/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.58s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-554000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (49.582578509s)
--- PASS: TestFunctional/serial/StartWithProxy (49.58s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.36s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-554000 --alsologtostderr -v=8: (37.358002105s)
functional_test.go:659: soft start took 37.358640071s for "functional-554000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.36s)
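
A "soft start" is a second `minikube start` against a profile that is already running: minikube detects the existing cluster and reconciles it rather than recreating it, which is why the soft start above completes faster than the initial start. In essence:

	out/minikube-darwin-amd64 start -p functional-554000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker   # initial start (~50s above)
	out/minikube-darwin-amd64 start -p functional-554000 --alsologtostderr -v=8                                          # soft start against the running profile (~37s above)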

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-554000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:3.1: (2.558658332s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:3.3: (2.395397175s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:latest: (2.03922144s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.99s)
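
`cache add` pulls an image into minikube's host-side cache and loads it into the node's container runtime; note that `cache list` and the top-level `cache delete` run without a profile flag, since the cache is shared across profiles. The basic flow being timed above, condensed:

	out/minikube-darwin-amd64 -p functional-554000 cache add registry.k8s.io/pause:3.1   # pull, cache, and load into the node
	out/minikube-darwin-amd64 cache list                                                 # show cached images
	out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl images                # verify from inside the node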

TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1048132383/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache add minikube-local-cache-test:functional-554000
functional_test.go:1085: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 cache add minikube-local-cache-test:functional-554000: (1.04100716s)
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache delete minikube-local-cache-test:functional-554000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-554000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (418.643409ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 cache reload: (1.435275671s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.65s)
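
cache_reload demonstrates recovering an image that was deleted from inside the node: after the `docker rmi`, `crictl inspecti` fails with exit status 1, and `cache reload` pushes the cached copy back into the runtime. Condensed from the log above:

	out/minikube-darwin-amd64 -p functional-554000 ssh sudo docker rmi registry.k8s.io/pause:latest        # drop the image inside the node
	out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: no such image
	out/minikube-darwin-amd64 -p functional-554000 cache reload                                            # re-load everything held in the local cache
	out/minikube-darwin-amd64 -p functional-554000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again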

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 kubectl -- --context functional-554000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.74s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-554000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.74s)

TestFunctional/serial/ExtraConfig (39.97s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-554000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.970126025s)
functional_test.go:757: restart took 39.970309702s for "functional-554000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.97s)
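
ExtraConfig restarts the cluster with a component flag injected via --extra-config, whose general form is --extra-config=<component>.<key>=<value>; here the apiserver is restarted with the NamespaceAutoProvision admission plugin enabled:

	out/minikube-darwin-amd64 start -p functional-554000 \
		--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all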

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-554000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (3.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 logs: (3.341868672s)
--- PASS: TestFunctional/serial/LogsCmd (3.34s)

TestFunctional/serial/LogsFileCmd (3.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2958560597/001/logs.txt
E0717 15:15:33.897241   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:33.920294   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:33.930568   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:33.951379   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:33.991504   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:34.071578   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:34.231924   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:34.552097   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 logs --file /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalserialLogsFileCmd2958560597/001/logs.txt: (3.243942242s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.25s)

TestFunctional/serial/InvalidService (4.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-554000 apply -f testdata/invalidsvc.yaml
E0717 15:15:35.193971   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:15:36.474379   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-554000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-554000: exit status 115 (562.725092ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30830 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-554000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.06s)
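
InvalidService applies a Service whose selector matches no running pod. `minikube service` still resolves a NodePort URL (the table in stdout) but then exits 115 with SVC_UNREACHABLE because there is no backing endpoint, which is exactly what the test asserts. Condensed:

	kubectl --context functional-554000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-554000      # exit status 115: SVC_UNREACHABLE
	kubectl --context functional-554000 delete -f testdata/invalidsvc.yaml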

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 config get cpus: exit status 14 (45.546648ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config unset cpus
E0717 15:15:39.034861   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 config get cpus: exit status 14 (46.68881ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
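
The ConfigCmd round trip pivots on exit codes: `config get` on an unset key exits 14 with "specified key could not be found in config", while after a set the same get returns the stored value. The sequence above, condensed:

	out/minikube-darwin-amd64 -p functional-554000 config get cpus     # exit 14 while unset
	out/minikube-darwin-amd64 -p functional-554000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-554000 config get cpus     # now succeeds, printing the stored value
	out/minikube-darwin-amd64 -p functional-554000 config unset cpus   # get exits 14 again afterwards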

TestFunctional/parallel/DashboardCmd (16.14s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-554000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-554000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 79503: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.14s)

TestFunctional/parallel/DryRun (1.57s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (656.465603ms)

-- stdout --
	* [functional-554000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0717 15:17:17.573822   79405 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:17:17.574004   79405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:17:17.574010   79405 out.go:309] Setting ErrFile to fd 2...
	I0717 15:17:17.574014   79405 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:17:17.574208   79405 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:17:17.575585   79405 out.go:303] Setting JSON to false
	I0717 15:17:17.595753   79405 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22605,"bootTime":1689609632,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:17:17.595844   79405 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:17:17.619023   79405 out.go:177] * [functional-554000] minikube v1.31.0 on Darwin 13.4.1
	I0717 15:17:17.660946   79405 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 15:17:17.660983   79405 notify.go:220] Checking for updates...
	I0717 15:17:17.703630   79405 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:17:17.724893   79405 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:17:17.745839   79405 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:17:17.766676   79405 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 15:17:17.787836   79405 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 15:17:17.809287   79405 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:17:17.809680   79405 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:17:17.866998   79405 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:17:17.867125   79405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:17:17.985741   79405 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 22:17:17.972485014 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:17:18.028408   79405 out.go:177] * Using the docker driver based on existing profile
	I0717 15:17:18.049285   79405 start.go:298] selected driver: docker
	I0717 15:17:18.049298   79405 start.go:880] validating driver "docker" against &{Name:functional-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:17:18.049399   79405 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 15:17:18.073387   79405 out.go:177] 
	W0717 15:17:18.094581   79405 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0717 15:17:18.115317   79405 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.57s)
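
DryRun validates flags without touching the existing cluster: requesting 250MB fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY, below the 1800MB usable minimum), while the second, unconstrained dry run passes. In short (verbosity flags trimmed):

	out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --driver=docker   # exit 23: memory below the usable minimum
	out/minikube-darwin-amd64 start -p functional-554000 --dry-run --driver=docker                  # validates and exits cleanly, starting nothing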

TestFunctional/parallel/InternationalLanguage (0.62s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (624.71758ms)

-- stdout --
	* [functional-554000] minikube v1.31.0 sur Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0717 15:17:19.144383   79449 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:17:19.144574   79449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:17:19.144579   79449 out.go:309] Setting ErrFile to fd 2...
	I0717 15:17:19.144583   79449 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:17:19.144795   79449 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:17:19.146379   79449 out.go:303] Setting JSON to false
	I0717 15:17:19.166041   79449 start.go:128] hostinfo: {"hostname":"MacOS-Agent-1.local","uptime":22607,"bootTime":1689609632,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b7610dcb-1435-5842-8d5a-b2388403fea3"}
	W0717 15:17:19.166134   79449 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0717 15:17:19.187751   79449 out.go:177] * [functional-554000] minikube v1.31.0 sur Darwin 13.4.1
	I0717 15:17:19.229918   79449 out.go:177]   - MINIKUBE_LOCATION=16899
	I0717 15:17:19.229962   79449 notify.go:220] Checking for updates...
	I0717 15:17:19.272460   79449 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	I0717 15:17:19.293445   79449 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0717 15:17:19.314533   79449 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0717 15:17:19.335598   79449 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	I0717 15:17:19.356539   79449 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0717 15:17:19.377728   79449 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:17:19.378137   79449 driver.go:373] Setting default libvirt URI to qemu:///system
	I0717 15:17:19.435400   79449 docker.go:121] docker version: linux-24.0.2:Docker Desktop 4.21.1 (114176)
	I0717 15:17:19.435556   79449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0717 15:17:19.540742   79449 info.go:266] docker info: {ID:37b3081a-8cd6-457e-b2a4-79dc82345b06 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:68 SystemTime:2023-07-17 22:17:19.5290462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:8 KernelVersion:5.15.49-linuxkit-pr OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:24.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3dce8eb055cbb6872793272b4f20ed16117344f8 Expected:3dce8eb055cbb6872793272b4f20ed16117344f8} RuncCommit:{ID:v1.1.7-0-g860f061 Expected:v1.1.7-0-g860f061} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.19.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.0] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.20] map[Name:init Path:/Users/jenkins/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-init] ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v0.1.0-beta.6] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.26.0] map[Name:scout Path:/Users/jenkins/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scout] ShortDescription:Command line tool for Docker Scout Vendor:Docker Inc. Version:0.16.1]] Warnings:<nil>}}
	I0717 15:17:19.600169   79449 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0717 15:17:19.622061   79449 start.go:298] selected driver: docker
	I0717 15:17:19.622103   79449 start.go:880] validating driver "docker" against &{Name:functional-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0717 15:17:19.622225   79449 start.go:891] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0717 15:17:19.646420   79449 out.go:177] 
	W0717 15:17:19.668422   79449 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0717 15:17:19.690244   79449 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.62s)
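
InternationalLanguage reruns the same under-provisioned dry run with a French locale (presumably selected through the locale environment, e.g. LC_ALL=fr; the exact mechanism is not visible in this log) and asserts that the RSRC_INSUFFICIENT_REQ_MEMORY message comes out translated:

	LC_ALL=fr out/minikube-darwin-amd64 start -p functional-554000 --dry-run --memory 250MB --driver=docker
	# X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ...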

TestFunctional/parallel/StatusCmd (1.19s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.19s)

TestFunctional/parallel/AddonsCmd (0.23s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.23s)

TestFunctional/parallel/PersistentVolumeClaim (29.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ff5802e2-762f-4a89-93c8-d8b88be3207d] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009139359s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-554000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-554000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-554000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-554000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d6e32f7f-c900-4e73-98c8-38696746e22a] Pending
helpers_test.go:344: "sp-pod" [d6e32f7f-c900-4e73-98c8-38696746e22a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d6e32f7f-c900-4e73-98c8-38696746e22a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.009070304s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-554000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-554000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-554000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [95243822-1413-4754-bc30-087f080faa08] Pending
helpers_test.go:344: "sp-pod" [95243822-1413-4754-bc30-087f080faa08] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [95243822-1413-4754-bc30-087f080faa08] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.009129616s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-554000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.38s)
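
The PersistentVolumeClaim test proves that data outlives the pod: a file written through the PVC mount by the first sp-pod is still present after that pod is deleted and recreated from the same manifest. The heart of the check:

	kubectl --context functional-554000 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-554000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554000 exec sp-pod -- touch /tmp/mount/foo    # write through the mounted volume
	kubectl --context functional-554000 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554000 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-554000 exec sp-pod -- ls /tmp/mount           # foo must still be listed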

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh -n functional-554000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 cp functional-554000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelCpCmd331128665/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh -n functional-554000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)
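
CpCmd copies a file in both directions and verifies the contents over ssh each time (the host-side destination below is illustrative; the test uses a temp directory):

	out/minikube-darwin-amd64 -p functional-554000 cp testdata/cp-test.txt /home/docker/cp-test.txt             # host -> node
	out/minikube-darwin-amd64 -p functional-554000 ssh -n functional-554000 "sudo cat /home/docker/cp-test.txt"
	out/minikube-darwin-amd64 -p functional-554000 cp functional-554000:/home/docker/cp-test.txt ./cp-test.txt  # node -> host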

TestFunctional/parallel/MySQL (41.62s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-554000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-7db894d786-wv87k" [fe6b1692-13e9-4e4c-a9d3-e2e275f37a61] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-7db894d786-wv87k" [fe6b1692-13e9-4e4c-a9d3-e2e275f37a61] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 36.014172391s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;": exit status 1 (180.815164ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;": exit status 1 (147.444152ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;": exit status 1 (130.382227ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (41.62s)
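
The "Access denied" and "Can't connect" errors above are the expected warm-up phase: the pod reports Running before mysqld has finished initializing, so the test keeps reissuing the query until it succeeds. The same retry loop, sketched in shell (interval illustrative):

	until kubectl --context functional-554000 exec mysql-7db894d786-wv87k -- \
			mysql -ppassword -e "show databases;"; do
		sleep 2   # retry until mysqld accepts the connection
	done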

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/77324/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /etc/test/nested/copy/77324/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (2.44s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/77324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /etc/ssl/certs/77324.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/77324.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /usr/share/ca-certificates/77324.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/773242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /etc/ssl/certs/773242.pem"
E0717 15:15:44.155241   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/773242.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /usr/share/ca-certificates/773242.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.44s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-554000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh "sudo systemctl is-active crio": exit status 1 (367.464683ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.37s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 version -o=json --components: (1.097384241s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-554000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-554000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-554000
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-554000 image ls --format short --alsologtostderr:
I0717 15:17:29.007015   79702 out.go:296] Setting OutFile to fd 1 ...
I0717 15:17:29.007222   79702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.007228   79702 out.go:309] Setting ErrFile to fd 2...
I0717 15:17:29.007234   79702 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.007435   79702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
I0717 15:17:29.008179   79702 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.008281   79702 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.008726   79702 cli_runner.go:164] Run: docker container inspect functional-554000 --format={{.State.Status}}
I0717 15:17:29.064160   79702 ssh_runner.go:195] Run: systemctl --version
I0717 15:17:29.064228   79702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554000
I0717 15:17:29.119004   79702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53454 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/functional-554000/id_rsa Username:docker}
I0717 15:17:29.275587   79702 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.37s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-554000 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | 4937520ae206c | 41.4MB |
| registry.k8s.io/etcd                        | 3.5.7-0            | 86b6af7dd652c | 296MB  |
| registry.k8s.io/pause                       | 3.9                | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-controller-manager     | v1.27.3            | 7cffc01dba0e1 | 112MB  |
| registry.k8s.io/pause                       | 3.3                | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8                | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest             | 021283c8eb95b | 187MB  |
| docker.io/library/mysql                     | 5.7                | 2be84dd575ee2 | 569MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1            | ead0a4a53df89 | 53.6MB |
| gcr.io/google-containers/addon-resizer      | functional-554000  | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest             | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-554000  | c7deaba5b9726 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.27.3            | 08a0c939e61b7 | 121MB  |
| registry.k8s.io/kube-proxy                  | v1.27.3            | 5780543258cf0 | 71.1MB |
| registry.k8s.io/kube-scheduler              | v1.27.3            | 41697ceeb70b3 | 58.4MB |
| docker.io/kindest/kindnetd                  | v20230511-dc714da8 | b0b1fa0f58c6e | 63.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1                | da86e6ba6ca19 | 742kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-554000 image ls --format table --alsologtostderr:
I0717 15:17:30.070666   79720 out.go:296] Setting OutFile to fd 1 ...
I0717 15:17:30.071035   79720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:30.071042   79720 out.go:309] Setting ErrFile to fd 2...
I0717 15:17:30.071047   79720 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:30.071347   79720 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
I0717 15:17:30.072053   79720 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:30.072150   79720 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:30.072644   79720 cli_runner.go:164] Run: docker container inspect functional-554000 --format={{.State.Status}}
I0717 15:17:30.130331   79720 ssh_runner.go:195] Run: systemctl --version
I0717 15:17:30.130412   79720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554000
I0717 15:17:30.184236   79720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53454 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/functional-554000/id_rsa Username:docker}
I0717 15:17:30.278603   79720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-554000 image ls --format json --alsologtostderr:
[{"id":"c7deaba5b97268bc05e01fdabe5fb0a93ee6cba26dfdcd1acbfb4c8b406994c1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-554000"],"size":"30"},{"id":"08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"121000000"},{"id":"5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"71100000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"58400000"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":[],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"63600000"},{"id":"86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"296000000"},{"id":"021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-554000"],"size":"32900000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41400000"},{"id":"7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"112000000"},{"id":"2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"569000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-554000 image ls --format json --alsologtostderr:
I0717 15:17:29.688347   79714 out.go:296] Setting OutFile to fd 1 ...
I0717 15:17:29.688616   79714 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.688621   79714 out.go:309] Setting ErrFile to fd 2...
I0717 15:17:29.688625   79714 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.688882   79714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
I0717 15:17:29.689531   79714 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.689625   79714 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.690004   79714 cli_runner.go:164] Run: docker container inspect functional-554000 --format={{.State.Status}}
I0717 15:17:29.747185   79714 ssh_runner.go:195] Run: systemctl --version
I0717 15:17:29.747265   79714 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554000
I0717 15:17:29.805255   79714 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53454 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/functional-554000/id_rsa Username:docker}
I0717 15:17:29.974957   79714 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.39s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-554000 image ls --format yaml --alsologtostderr:
- id: 021283c8eb95be02b23db0de7f609d603553c6714785e7a673c6594a624ffbda
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 08a0c939e61b7340db53ebf07b4d0e908a35ad8d94e2cb7d0f958210e567079a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "121000000"
- id: 5780543258cf06f98595c003c0c6d22768d1fc8e9852e2839018a4bb3bfe163c
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "71100000"
- id: 2be84dd575ee2ecdb186dc43a9cd951890a764d2cefbd31a72cdf4410c43a2d0
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "569000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 7cffc01dba0e151e525544f87958d12c0fa62a9f173bbc930200ce815f2aaf3f
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "112000000"
- id: 86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "296000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c7deaba5b97268bc05e01fdabe5fb0a93ee6cba26dfdcd1acbfb4c8b406994c1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-554000
size: "30"
- id: 4937520ae206c8969734d9a659fc1e6594d9b22b9340bf0796defbea0c92dd02
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41400000"
- id: 41697ceeb70b3f49e54ed46f2cf27ac5b3a201a7d9668ca327588b23fafdf36a
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "58400000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-554000
size: "32900000"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests: []
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "63600000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-554000 image ls --format yaml --alsologtostderr:
I0717 15:17:29.368907   79708 out.go:296] Setting OutFile to fd 1 ...
I0717 15:17:29.369150   79708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.369155   79708 out.go:309] Setting ErrFile to fd 2...
I0717 15:17:29.369159   79708 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:29.369427   79708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
I0717 15:17:29.370445   79708 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.370552   79708 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:29.370977   79708 cli_runner.go:164] Run: docker container inspect functional-554000 --format={{.State.Status}}
I0717 15:17:29.427695   79708 ssh_runner.go:195] Run: systemctl --version
I0717 15:17:29.427775   79708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554000
I0717 15:17:29.485622   79708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53454 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/functional-554000/id_rsa Username:docker}
I0717 15:17:29.580029   79708 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
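
Note: the four ImageList variants above differ only in the --format flag passed to the same subcommand; a minimal reproduction sketch (profile name taken from this run):

  out/minikube-darwin-amd64 -p functional-554000 image ls --format short
  out/minikube-darwin-amd64 -p functional-554000 image ls --format table
  out/minikube-darwin-amd64 -p functional-554000 image ls --format json
  out/minikube-darwin-amd64 -p functional-554000 image ls --format yaml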

TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh pgrep buildkitd: exit status 1 (440.358794ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image build -t localhost/my-image:functional-554000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image build -t localhost/my-image:functional-554000 testdata/build --alsologtostderr: (2.849632628s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-554000 image build -t localhost/my-image:functional-554000 testdata/build --alsologtostderr:
I0717 15:17:30.820392   79736 out.go:296] Setting OutFile to fd 1 ...
I0717 15:17:30.820583   79736 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:30.820590   79736 out.go:309] Setting ErrFile to fd 2...
I0717 15:17:30.820595   79736 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0717 15:17:30.820797   79736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
I0717 15:17:30.821529   79736 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:30.822210   79736 config.go:182] Loaded profile config "functional-554000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0717 15:17:30.822663   79736 cli_runner.go:164] Run: docker container inspect functional-554000 --format={{.State.Status}}
I0717 15:17:30.876578   79736 ssh_runner.go:195] Run: systemctl --version
I0717 15:17:30.876653   79736 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-554000
I0717 15:17:30.932937   79736 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:53454 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/functional-554000/id_rsa Username:docker}
I0717 15:17:31.076085   79736 build_images.go:151] Building image from path: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1612067957.tar
I0717 15:17:31.076184   79736 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0717 15:17:31.092931   79736 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1612067957.tar
I0717 15:17:31.097648   79736 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1612067957.tar: stat -c "%s %y" /var/lib/minikube/build/build.1612067957.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1612067957.tar': No such file or directory
I0717 15:17:31.097689   79736 ssh_runner.go:362] scp /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1612067957.tar --> /var/lib/minikube/build/build.1612067957.tar (3072 bytes)
I0717 15:17:31.171013   79736 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1612067957
I0717 15:17:31.182215   79736 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1612067957 -xf /var/lib/minikube/build/build.1612067957.tar
I0717 15:17:31.192079   79736 docker.go:339] Building image: /var/lib/minikube/build/build.1612067957
I0717 15:17:31.192160   79736 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-554000 /var/lib/minikube/build/build.1612067957
#0 building with "default" instance using docker driver

#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.0s

#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 97B done
#2 DONE 0.0s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 1.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:39d42136f460a97379a3f1b54cda0073a563df9e94054106182689df56b5021a done
#8 naming to localhost/my-image:functional-554000 done
#8 DONE 0.0s
I0717 15:17:33.572403   79736 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-554000 /var/lib/minikube/build/build.1612067957: (2.380173536s)
I0717 15:17:33.572468   79736 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1612067957
I0717 15:17:33.584716   79736 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1612067957.tar
I0717 15:17:33.594733   79736 build_images.go:207] Built localhost/my-image:functional-554000 from /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/build.1612067957.tar
I0717 15:17:33.594763   79736 build_images.go:123] succeeded building to: functional-554000
I0717 15:17:33.594775   79736 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
2023/07/17 15:17:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.68s)
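
Note: as the stderr trace shows, image build packs the build context into a tar on the host, copies it to /var/lib/minikube/build/ inside the node, and runs docker build there. A hand-run sketch of the test's steps (testdata/build is the context directory used in this run):

  out/minikube-darwin-amd64 -p functional-554000 ssh pgrep buildkitd    # exit status 1 above means buildkitd is not running
  out/minikube-darwin-amd64 -p functional-554000 image build -t localhost/my-image:functional-554000 testdata/build
  out/minikube-darwin-amd64 -p functional-554000 image ls               # confirm localhost/my-image appears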

TestFunctional/parallel/ImageCommands/Setup (3.15s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.085176263s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-554000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (3.15s)

TestFunctional/parallel/DockerEnv/bash (2.35s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-554000 docker-env) && out/minikube-darwin-amd64 status -p functional-554000"
functional_test.go:495: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-554000 docker-env) && out/minikube-darwin-amd64 status -p functional-554000": (1.331545806s)
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-554000 docker-env) && docker images"
functional_test.go:518: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-554000 docker-env) && docker images": (1.018670896s)
--- PASS: TestFunctional/parallel/DockerEnv/bash (2.35s)
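
Note: docker-env prints shell exports that point the host docker client at the daemon inside the minikube node, and the test evaluates them in a subshell; the same pattern, taken verbatim from this run:

  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-554000 docker-env) && docker images"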

TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.27s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.27s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr: (4.288063723s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.58s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr: (2.616585941s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.99s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.040309627s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-554000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr
E0717 15:15:54.395659   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image load --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr: (4.625203016s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.10s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image save gcr.io/google-containers/addon-resizer:functional-554000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image save gcr.io/google-containers/addon-resizer:functional-554000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.939632574s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.94s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image rm gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.86s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (2.499836018s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.80s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-554000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 image save --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-554000 image save --daemon gcr.io/google-containers/addon-resizer:functional-554000 --alsologtostderr: (2.928585395s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-554000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.05s)
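
Note: the last four image tests form a save/remove/load round trip; condensed into a sketch (the tar path is the one used in this run):

  out/minikube-darwin-amd64 -p functional-554000 image save gcr.io/google-containers/addon-resizer:functional-554000 /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-554000 image rm gcr.io/google-containers/addon-resizer:functional-554000
  out/minikube-darwin-amd64 -p functional-554000 image load /Users/jenkins/workspace/addon-resizer-save.tar
  out/minikube-darwin-amd64 -p functional-554000 image save --daemon gcr.io/google-containers/addon-resizer:functional-554000
  docker image inspect gcr.io/google-containers/addon-resizer:functional-554000    # image is back in the host daemon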

TestFunctional/parallel/ServiceCmd/DeployApp (17.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-554000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-554000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-775766b4cc-jj6zk" [925f502e-e588-45f3-917f-72fd6b1495cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
E0717 15:16:14.876482   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:344: "hello-node-775766b4cc-jj6zk" [925f502e-e588-45f3-917f-72fd6b1495cf] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.008940408s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.15s)

TestFunctional/parallel/ServiceCmd/List (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.42s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 service list -o json
functional_test.go:1493: Took "436.468669ms" to run "out/minikube-darwin-amd64 -p functional-554000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.44s)

TestFunctional/parallel/ServiceCmd/HTTPS (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 service --namespace=default --https --url hello-node
functional_test.go:1508: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 service --namespace=default --https --url hello-node: signal: killed (15.002265634s)

-- stdout --
	https://127.0.0.1:53713

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1521: found endpoint: https://127.0.0.1:53713
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 79181: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-554000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [950e646d-e30e-4c50-a221-b7ae6d2f8657] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [950e646d-e30e-4c50-a221-b7ae6d2f8657] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.009165569s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.22s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-554000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 79210: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
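
Note: the TunnelCmd serial tests start minikube tunnel as a background daemon, wait for a LoadBalancer service to be assigned an ingress IP, then tear the tunnel down. A manual sketch of the same sequence (testdata/testsvc.yaml is the manifest the test applies):

  out/minikube-darwin-amd64 -p functional-554000 tunnel --alsologtostderr &
  kubectl --context functional-554000 apply -f testdata/testsvc.yaml
  kubectl --context functional-554000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}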

TestFunctional/parallel/ServiceCmd/Format (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 service hello-node --url --format={{.IP}}
functional_test.go:1539: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 service hello-node --url --format={{.IP}}: signal: killed (15.004409279s)

-- stdout --
	127.0.0.1

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.00s)


TestFunctional/parallel/ServiceCmd/URL (15s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
E0717 15:16:55.838484   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
functional_test.go:1558: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 service hello-node --url: signal: killed (15.001599001s)

-- stdout --
	http://127.0.0.1:53786

-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

** /stderr **
functional_test.go:1564: found endpoint for hello-node: http://127.0.0.1:53786
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1314: Took "391.349059ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1328: Took "67.013952ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1365: Took "393.921463ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1378: Took "67.601979ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.46s)

TestFunctional/parallel/MountCmd/any-port (8.95s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port58961400/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689632232899962000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port58961400/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689632232899962000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port58961400/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689632232899962000" to /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port58961400/001/test-1689632232899962000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.169395ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 17 22:17 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 17 22:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 17 22:17 test-1689632232899962000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh cat /mount-9p/test-1689632232899962000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-554000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9d7bd525-b502-4f5e-a662-40e5f14599b9] Pending
helpers_test.go:344: "busybox-mount" [9d7bd525-b502-4f5e-a662-40e5f14599b9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9d7bd525-b502-4f5e-a662-40e5f14599b9] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9d7bd525-b502-4f5e-a662-40e5f14599b9] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01020656s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-554000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdany-port58961400/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.95s)
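
Note: the mount test drives a 9p mount from the host into the node and verifies it from both sides; a reduced manual sketch (HOST_DIR stands in for the per-run temp directory, substitute any local path):

  out/minikube-darwin-amd64 mount -p functional-554000 HOST_DIR:/mount-9p --alsologtostderr -v=1 &
  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-darwin-amd64 -p functional-554000 ssh -- ls -la /mount-9p
  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo umount -f /mount-9p"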

TestFunctional/parallel/MountCmd/specific-port (2.29s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port768868579/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.770474ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port768868579/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh "sudo umount -f /mount-9p": exit status 1 (398.777205ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-554000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdspecific-port768868579/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.29s)
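
Note: the specific-port variant is the same flow pinned to port 46464 via --port. The teardown behavior above is deliberate: umount -f on an already-unmounted path fails (the guest reports status 32, "not mounted"), and the cleanup path tolerates that. A minimal sketch (host path illustrative):

    out/minikube-darwin-amd64 mount -p functional-554000 /tmp/testdir:/mount-9p --port 46464 &
    out/minikube-darwin-amd64 -p functional-554000 ssh "sudo umount -f /mount-9p"   # status 32 if nothing is mounted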

TestFunctional/parallel/MountCmd/VerifyCleanup (2.82s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T" /mount1: exit status 1 (652.171488ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-554000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-554000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-554000 /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3261839886/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.82s)
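
Note: VerifyCleanup exercises mount --kill=true, which terminates every mount daemon attached to the profile; the "unable to find parent, assuming dead" lines confirm the processes were already gone when the test later tried to stop them itself. A minimal sketch (host path illustrative):

    out/minikube-darwin-amd64 mount -p functional-554000 /tmp/testdir:/mount1 &
    out/minikube-darwin-amd64 mount -p functional-554000 /tmp/testdir:/mount2 &
    out/minikube-darwin-amd64 mount -p functional-554000 --kill=true   # kills all mount processes for the profile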

TestFunctional/delete_addon-resizer_images (0.2s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-554000
--- PASS: TestFunctional/delete_addon-resizer_images (0.20s)

TestFunctional/delete_my-image_image (0.05s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-554000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-554000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestImageBuild/serial/Setup (23.28s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-532000 --driver=docker 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-532000 --driver=docker : (23.2820206s)
--- PASS: TestImageBuild/serial/Setup (23.28s)

TestImageBuild/serial/NormalBuild (2.67s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-532000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-532000: (2.66908765s)
--- PASS: TestImageBuild/serial/NormalBuild (2.67s)

TestImageBuild/serial/BuildWithBuildArg (0.86s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-532000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.86s)

TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-532000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.69s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.7s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-532000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.70s)
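
Note: the serial ImageBuild cases above cover the main "minikube image build" variants: a plain tagged build, build args and cache disabling via --build-opt, and an alternate Dockerfile via -f. A condensed sketch, assuming a running profile named image-demo (the profile name is illustrative; the flag spellings match the runs above):

    out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-demo
    out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-demo
    out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-demo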

TestJSONOutput/start/Command (50.23s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-371000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0717 15:26:13.489775   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-371000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (50.229826277s)
--- PASS: TestJSONOutput/start/Command (50.23s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-371000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-371000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-371000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-371000 --output=json --user=testUser: (10.929750974s)
--- PASS: TestJSONOutput/stop/Command (10.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.71s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-773000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-773000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (344.729924ms)
-- stdout --
	{"specversion":"1.0","id":"4349f8f1-41c0-4e0b-b867-d5a446f57e0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-773000] minikube v1.31.0 on Darwin 13.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e76fe852-1200-4fd9-a72d-99f7646ad46f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"f8e5b968-2f5f-48c7-8b11-6cbd13d84a45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig"}}
	{"specversion":"1.0","id":"69dacd96-a12e-401f-8aba-68f514b03e92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"1747b65d-d44e-4b1e-9404-7a9b5cd8fbce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3fa30434-0e9a-4883-b55c-bd5d3c5215c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube"}}
	{"specversion":"1.0","id":"50687b28-b10a-406d-b10f-1dca2319aca3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"eaaea9d8-96cd-44be-b82f-55e16a8348d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-773000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-773000
--- PASS: TestErrorJSONOutput (0.71s)
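
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line on stdout; a failure surfaces as a type io.k8s.sigs.minikube.error event whose data carries the exit code and error name (here DRV_UNSUPPORTED_OS, exit status 56). A sketch of pulling the error name out with jq (the profile name is illustrative, and jq is simply one convenient consumer of the stream):

    out/minikube-darwin-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'
    # prints DRV_UNSUPPORTED_OS; minikube itself exits with status 56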

TestKicCustomNetwork/create_custom_network (25s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-589000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-589000 --network=: (22.448609082s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-589000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-589000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-589000: (2.499033357s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.00s)

TestKicCustomNetwork/use_default_bridge_network (24.37s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-585000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-585000 --network=bridge: (22.00318925s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-585000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-585000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-585000: (2.318852585s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.37s)

TestKicExistingNetwork (24.36s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-988000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-988000 --network=existing-network: (21.720148501s)
helpers_test.go:175: Cleaning up "existing-network-988000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-988000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-988000: (2.30490461s)
--- PASS: TestKicExistingNetwork (24.36s)

TestKicCustomSubnet (24.8s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-266000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-266000 --subnet=192.168.60.0/24: (22.278796064s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-266000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-266000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-266000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-266000: (2.469746995s)
--- PASS: TestKicCustomSubnet (24.80s)
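
Note: the custom-subnet check relies on the Docker network created for the profile sharing the profile's name. A sketch of the verification step (profile name illustrative; the inspect format string matches the run above):

    out/minikube-darwin-amd64 start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24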

TestKicStaticIP (24.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-455000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-455000 --static-ip=192.168.200.200: (21.46396515s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-455000 ip
helpers_test.go:175: Cleaning up "static-ip-455000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-455000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-455000: (2.482548612s)
--- PASS: TestKicStaticIP (24.17s)
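
Note: similarly, --static-ip can be checked with the ip subcommand (profile name illustrative):

    out/minikube-darwin-amd64 start -p ip-demo --static-ip=192.168.200.200
    out/minikube-darwin-amd64 -p ip-demo ip   # should print 192.168.200.200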

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (52.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-842000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-842000 --driver=docker : (23.240605734s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-845000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-845000 --driver=docker : (22.661005708s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-842000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-845000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-845000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-845000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-845000: (2.475903492s)
helpers_test.go:175: Cleaning up "first-842000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-842000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-842000: (2.457502267s)
--- PASS: TestMinikubeProfile (52.53s)
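
Note: the profile test alternates the active profile between the two clusters and lists them as JSON after each switch. The two commands it interleaves:

    out/minikube-darwin-amd64 profile first-842000    # make this profile the active one
    out/minikube-darwin-amd64 profile list -ojson     # machine-readable view of all profiles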

TestMountStart/serial/StartWithMountFirst (7.97s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-521000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-521000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.968227122s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.97s)

TestMountStart/serial/VerifyMountFirst (0.37s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-521000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (8.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-534000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-534000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.270409967s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.27s)

TestMountStart/serial/VerifyMountSecond (0.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-534000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.36s)

TestMountStart/serial/DeleteFirst (2.08s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-521000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-521000 --alsologtostderr -v=5: (2.080124026s)
--- PASS: TestMountStart/serial/DeleteFirst (2.08s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-534000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.55s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-534000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-534000: (1.549130692s)
--- PASS: TestMountStart/serial/Stop (1.55s)

TestMountStart/serial/RestartStopped (9.15s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-534000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-534000: (8.152968873s)
--- PASS: TestMountStart/serial/RestartStopped (9.15s)

TestMountStart/serial/VerifyMountPostStop (0.36s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-534000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.36s)
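
Note: the MountStart serial group shows that a host mount configured at start time (--mount plus the --mount-* tuning flags) survives both deletion of a sibling profile and a stop/start cycle, as the repeated "ls /minikube-host" checks confirm. A sketch of the start invocation mirroring the flags above (profile name illustrative):

    out/minikube-darwin-amd64 start -p mount-demo --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker
    out/minikube-darwin-amd64 -p mount-demo ssh -- ls /minikube-host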

TestMultiNode/serial/FreshStart2Nodes (64.6s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-748000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0717 15:30:33.907048   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:30:45.778856   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-748000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m3.826623077s)
multinode_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.60s)

TestMultiNode/serial/DeployApp2Nodes (52.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-748000 -- rollout status deployment/busybox: (3.821194125s)
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0717 15:31:56.951634   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-ff526 -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-tc86q -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-ff526 -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-tc86q -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-ff526 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-tc86q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (52.08s)
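
Note: the repeated "expected 2 Pod IPs but got 1 (may be temporary)" lines are the test's polling loop, not failures; it re-runs the jsonpath query until the busybox deployment reports one pod IP per node. The query it repeats:

    out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].status.podIP}'
    # retried until two space-separated pod IPs appear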

TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-ff526 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-ff526 -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-tc86q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-748000 -- exec busybox-67b7f59bb-tc86q -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (16.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-748000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-748000 -v 3 --alsologtostderr: (15.319782793s)
multinode_test.go:116: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.31s)

TestMultiNode/serial/ProfileList (0.4s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.40s)

TestMultiNode/serial/CopyFile (13.69s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp testdata/cp-test.txt multinode-748000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2673183797/001/cp-test_multinode-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000:/home/docker/cp-test.txt multinode-748000-m02:/home/docker/cp-test_multinode-748000_multinode-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test_multinode-748000_multinode-748000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000:/home/docker/cp-test.txt multinode-748000-m03:/home/docker/cp-test_multinode-748000_multinode-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test_multinode-748000_multinode-748000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp testdata/cp-test.txt multinode-748000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m02:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2673183797/001/cp-test_multinode-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m02:/home/docker/cp-test.txt multinode-748000:/home/docker/cp-test_multinode-748000-m02_multinode-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test_multinode-748000-m02_multinode-748000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m02:/home/docker/cp-test.txt multinode-748000-m03:/home/docker/cp-test_multinode-748000-m02_multinode-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test_multinode-748000-m02_multinode-748000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp testdata/cp-test.txt multinode-748000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m03:/home/docker/cp-test.txt /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestMultiNodeserialCopyFile2673183797/001/cp-test_multinode-748000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m03:/home/docker/cp-test.txt multinode-748000:/home/docker/cp-test_multinode-748000-m03_multinode-748000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000 "sudo cat /home/docker/cp-test_multinode-748000-m03_multinode-748000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000-m03:/home/docker/cp-test.txt multinode-748000-m02:/home/docker/cp-test_multinode-748000-m03_multinode-748000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test_multinode-748000-m03_multinode-748000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (13.69s)
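
Note: CopyFile exercises all three "minikube cp" directions (local to node, node to local, node to node) for every node, each verified by catting the file over ssh. A condensed sketch of one round trip:

    out/minikube-darwin-amd64 -p multinode-748000 cp testdata/cp-test.txt multinode-748000:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-748000 cp multinode-748000:/home/docker/cp-test.txt multinode-748000-m02:/home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p multinode-748000 ssh -n multinode-748000-m02 "sudo cat /home/docker/cp-test.txt"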

TestMultiNode/serial/StopNode (2.92s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-darwin-amd64 -p multinode-748000 node stop m03: (1.500620535s)
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-748000 status: exit status 7 (704.099433ms)
-- stdout --
	multinode-748000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-748000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-748000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr: exit status 7 (711.534183ms)
-- stdout --
	multinode-748000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-748000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-748000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0717 15:33:02.467792   82782 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:33:02.467979   82782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:33:02.467986   82782 out.go:309] Setting ErrFile to fd 2...
	I0717 15:33:02.467992   82782 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:33:02.468192   82782 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:33:02.468383   82782 out.go:303] Setting JSON to false
	I0717 15:33:02.468405   82782 mustload.go:65] Loading cluster: multinode-748000
	I0717 15:33:02.468454   82782 notify.go:220] Checking for updates...
	I0717 15:33:02.468723   82782 config.go:182] Loaded profile config "multinode-748000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:33:02.468734   82782 status.go:255] checking status of multinode-748000 ...
	I0717 15:33:02.469204   82782 cli_runner.go:164] Run: docker container inspect multinode-748000 --format={{.State.Status}}
	I0717 15:33:02.521190   82782 status.go:330] multinode-748000 host status = "Running" (err=<nil>)
	I0717 15:33:02.521220   82782 host.go:66] Checking if "multinode-748000" exists ...
	I0717 15:33:02.521472   82782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-748000
	I0717 15:33:02.574120   82782 host.go:66] Checking if "multinode-748000" exists ...
	I0717 15:33:02.574378   82782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 15:33:02.574440   82782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-748000
	I0717 15:33:02.626674   82782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54328 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/multinode-748000/id_rsa Username:docker}
	I0717 15:33:02.716797   82782 ssh_runner.go:195] Run: systemctl --version
	I0717 15:33:02.721951   82782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:33:02.732760   82782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-748000
	I0717 15:33:02.784908   82782 kubeconfig.go:92] found "multinode-748000" server: "https://127.0.0.1:54332"
	I0717 15:33:02.784936   82782 api_server.go:166] Checking apiserver status ...
	I0717 15:33:02.784984   82782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0717 15:33:02.796634   82782 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2110/cgroup
	W0717 15:33:02.806286   82782 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2110/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0717 15:33:02.806340   82782 ssh_runner.go:195] Run: ls
	I0717 15:33:02.810945   82782 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:54332/healthz ...
	I0717 15:33:02.817781   82782 api_server.go:279] https://127.0.0.1:54332/healthz returned 200:
	ok
	I0717 15:33:02.817796   82782 status.go:421] multinode-748000 apiserver status = Running (err=<nil>)
	I0717 15:33:02.817805   82782 status.go:257] multinode-748000 status: &{Name:multinode-748000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 15:33:02.817816   82782 status.go:255] checking status of multinode-748000-m02 ...
	I0717 15:33:02.818067   82782 cli_runner.go:164] Run: docker container inspect multinode-748000-m02 --format={{.State.Status}}
	I0717 15:33:02.872286   82782 status.go:330] multinode-748000-m02 host status = "Running" (err=<nil>)
	I0717 15:33:02.872320   82782 host.go:66] Checking if "multinode-748000-m02" exists ...
	I0717 15:33:02.872612   82782 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-748000-m02
	I0717 15:33:02.925586   82782 host.go:66] Checking if "multinode-748000-m02" exists ...
	I0717 15:33:02.925873   82782 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0717 15:33:02.925927   82782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-748000-m02
	I0717 15:33:02.980901   82782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:54366 SSHKeyPath:/Users/jenkins/minikube-integration/16899-76867/.minikube/machines/multinode-748000-m02/id_rsa Username:docker}
	I0717 15:33:03.071923   82782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0717 15:33:03.083662   82782 status.go:257] multinode-748000-m02 status: &{Name:multinode-748000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0717 15:33:03.083684   82782 status.go:255] checking status of multinode-748000-m03 ...
	I0717 15:33:03.083956   82782 cli_runner.go:164] Run: docker container inspect multinode-748000-m03 --format={{.State.Status}}
	I0717 15:33:03.135289   82782 status.go:330] multinode-748000-m03 host status = "Stopped" (err=<nil>)
	I0717 15:33:03.135311   82782 status.go:343] host is not running, skipping remaining checks
	I0717 15:33:03.135321   82782 status.go:257] multinode-748000-m03 status: &{Name:multinode-748000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.92s)
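
Note: the non-zero exits here are by design: "minikube status" still prints a per-node breakdown (host/kubelet, plus apiserver/kubeconfig for the control plane) but exits non-zero (status 7 in these runs) while any node is stopped. A sketch:

    out/minikube-darwin-amd64 -p multinode-748000 node stop m03
    out/minikube-darwin-amd64 -p multinode-748000 status   # non-zero exit while m03 is down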

TestMultiNode/serial/StartAfterStop (13.53s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-darwin-amd64 -p multinode-748000 node start m03 --alsologtostderr: (12.50428934s)
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.53s)

TestMultiNode/serial/RestartKeepsNodes (116.96s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-748000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-748000
multinode_test.go:290: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-748000: (22.966263833s)
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-748000 --wait=true -v=8 --alsologtostderr
multinode_test.go:295: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-748000 --wait=true -v=8 --alsologtostderr: (1m33.90081351s)
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-748000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (116.96s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-darwin-amd64 -p multinode-748000 node delete m03: (5.0550762s)
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.88s)
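Note: the go-template in the last step above walks every node's status.conditions and prints only the Ready condition's status, so a healthy cluster prints one "True" per node. A minimal Go sketch of the same check (hypothetical, not part of the suite; assumes kubectl on PATH and a reachable cluster), decoding `kubectl get nodes -o json` instead of using a template:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type nodeList struct {
		Items []struct {
			Metadata struct {
				Name string `json:"name"`
			} `json:"metadata"`
			Status struct {
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var nl nodeList
		if err := json.Unmarshal(out, &nl); err != nil {
			log.Fatal(err)
		}
		// Fail on the first node whose Ready condition is not "True".
		for _, n := range nl.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == "Ready" && c.Status != "True" {
					log.Fatalf("node %s not Ready: %s", n.Metadata.Name, c.Status)
				}
			}
		}
		fmt.Println("all nodes Ready")
	}

The template form avoids a JSON round-trip; the struct form makes the assertion explicit.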

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 stop
E0717 15:35:33.895922   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p multinode-748000 stop: (21.523598122s)
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-748000 status: exit status 7 (143.517227ms)

                                                
                                                
-- stdout --
	multinode-748000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-748000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr: exit status 7 (146.053673ms)

                                                
                                                
-- stdout --
	multinode-748000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-748000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0717 15:35:41.220157   83217 out.go:296] Setting OutFile to fd 1 ...
	I0717 15:35:41.220347   83217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:35:41.220352   83217 out.go:309] Setting ErrFile to fd 2...
	I0717 15:35:41.220356   83217 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0717 15:35:41.220540   83217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/16899-76867/.minikube/bin
	I0717 15:35:41.220710   83217 out.go:303] Setting JSON to false
	I0717 15:35:41.220731   83217 mustload.go:65] Loading cluster: multinode-748000
	I0717 15:35:41.220784   83217 notify.go:220] Checking for updates...
	I0717 15:35:41.221033   83217 config.go:182] Loaded profile config "multinode-748000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0717 15:35:41.221045   83217 status.go:255] checking status of multinode-748000 ...
	I0717 15:35:41.221422   83217 cli_runner.go:164] Run: docker container inspect multinode-748000 --format={{.State.Status}}
	I0717 15:35:41.272287   83217 status.go:330] multinode-748000 host status = "Stopped" (err=<nil>)
	I0717 15:35:41.272306   83217 status.go:343] host is not running, skipping remaining checks
	I0717 15:35:41.272313   83217 status.go:257] multinode-748000 status: &{Name:multinode-748000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0717 15:35:41.272349   83217 status.go:255] checking status of multinode-748000-m02 ...
	I0717 15:35:41.272634   83217 cli_runner.go:164] Run: docker container inspect multinode-748000-m02 --format={{.State.Status}}
	I0717 15:35:41.323779   83217 status.go:330] multinode-748000-m02 host status = "Stopped" (err=<nil>)
	I0717 15:35:41.323812   83217 status.go:343] host is not running, skipping remaining checks
	I0717 15:35:41.323821   83217 status.go:257] multinode-748000-m02 status: &{Name:multinode-748000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.81s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (60.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-748000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0717 15:35:45.775142   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-748000 --wait=true -v=8 --alsologtostderr --driver=docker : (59.818218664s)
multinode_test.go:360: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-748000 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.67s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-748000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-748000-m02 --driver=docker 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-748000-m02 --driver=docker : exit status 14 (408.065137ms)

                                                
                                                
-- stdout --
	* [multinode-748000-m02] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-748000-m02' is duplicated with machine name 'multinode-748000-m02' in profile 'multinode-748000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-748000-m03 --driver=docker 
multinode_test.go:460: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-748000-m03 --driver=docker : (23.04133429s)
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-748000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-748000: exit status 80 (459.125183ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-748000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-748000-m03 already exists in multinode-748000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-748000-m03
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-748000-m03: (2.459984604s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.41s)

                                                
                                    
TestPreload (169.57s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-787000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-787000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m23.36389321s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-787000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-787000 image pull gcr.io/k8s-minikube/busybox: (2.425146685s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-787000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-787000: (10.803757334s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-787000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-787000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (1m10.143757447s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-787000 image list
helpers_test.go:175: Cleaning up "test-preload-787000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-787000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-787000: (2.548555729s)
--- PASS: TestPreload (169.57s)

                                                
                                    
TestScheduledStopUnix (95.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-560000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-560000 --memory=2048 --driver=docker : (21.842282485s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-560000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-560000 -n scheduled-stop-560000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-560000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-560000 --cancel-scheduled
E0717 15:40:33.898480   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:40:45.779158   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-560000 -n scheduled-stop-560000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-560000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-560000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-560000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-560000: exit status 7 (153.908799ms)

                                                
                                                
-- stdout --
	scheduled-stop-560000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-560000 -n scheduled-stop-560000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-560000 -n scheduled-stop-560000: exit status 7 (93.279163ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-560000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-560000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-560000: (2.180340032s)
--- PASS: TestScheduledStopUnix (95.84s)
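Note: TestScheduledStopUnix drives `minikube stop --schedule`, cancels and reschedules the stop, then expects `status` to exit 7 once the host actually goes down. A minimal Go sketch of that final wait (hypothetical, not part of the suite; assumes minikube on PATH, profile name taken from this run):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const profile = "scheduled-stop-560000" // profile name from the run above
		// Schedule a stop 15s out, as the test does.
		if err := exec.Command("minikube", "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
			log.Fatal(err)
		}
		// Poll the host state; `minikube status` exits 7 once the host is down,
		// so the non-nil error from Output() is expected at that point.
		for i := 0; i < 30; i++ {
			out, _ := exec.Command("minikube", "status", "-p", profile, "--format", "{{.Host}}").Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				fmt.Println("host stopped on schedule")
				return
			}
			time.Sleep(2 * time.Second)
		}
		log.Fatal("timed out waiting for the scheduled stop")
	}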

                                                
                                    
TestSkaffold (120.44s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2385150925 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-258000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-258000 --memory=2600 --driver=docker : (21.929406666s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2385150925 run --minikube-profile skaffold-258000 --kube-context skaffold-258000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/skaffold.exe2385150925 run --minikube-profile skaffold-258000 --kube-context skaffold-258000 --status-check=true --port-forward=false --interactive=false: (1m22.192603021s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d957dfc-pjjxp" [ed6194ef-d395-4dc4-8617-f4b3d10e2df1] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013232688s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-65d8c484bb-p749t" [665dc46d-b36f-4b45-a784-55839143b824] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008496279s
helpers_test.go:175: Cleaning up "skaffold-258000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-258000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-258000: (3.050948557s)
--- PASS: TestSkaffold (120.44s)

                                                
                                    
TestInsufficientStorage (10.81s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-210000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-210000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (7.826224538s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"85776957-119e-4c93-988b-c441406b7fde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-210000] minikube v1.31.0 on Darwin 13.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"bd7671ef-d8ef-45c4-bed4-c8ed8ffa5462","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16899"}}
	{"specversion":"1.0","id":"cddb32e7-70fc-448c-8f30-d4755d0494ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig"}}
	{"specversion":"1.0","id":"9e78807a-27d3-4b0a-9698-661d8d33f0cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"121b3ced-3eb5-4471-9581-d32e64725011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b1342e24-0346-4ee3-84f8-7d350ca6871f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube"}}
	{"specversion":"1.0","id":"26c86eb9-449b-481f-87df-e64c78b952ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"40f474d9-075e-43fc-9438-511362678422","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"2112a1e8-5c37-41b3-b416-c5df0b637fad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aa135d85-5b09-49c4-b76d-133951d89b95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a77425f-c97f-4784-abc6-95811ddb76ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"77878906-f944-4f08-8e84-060efae77f43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-210000 in cluster insufficient-storage-210000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f0708f2-a059-4b08-97e2-0b800aae8bc3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5ea4c074-b353-482b-a116-703892335b5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"dfccb4f3-bb9e-4839-b561-808349a653b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-210000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-210000 --output=json --layout=cluster: exit status 7 (364.941332ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 15:43:46.919288   84695 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-210000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-210000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-210000 --output=json --layout=cluster: exit status 7 (365.42382ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0717 15:43:47.285733   84705 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-210000" does not appear in /Users/jenkins/minikube-integration/16899-76867/kubeconfig
	E0717 15:43:47.295854   84705 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/insufficient-storage-210000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-210000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-210000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-210000: (2.251712785s)
--- PASS: TestInsufficientStorage (10.81s)
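Note: with --output=json, each line minikube emits is a CloudEvents envelope (specversion/id/source/type plus a flat string map under data), as in the capture above. A minimal Go sketch (hypothetical) that decodes such a stream from stdin and prints each event's type and message:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event models the fields of the envelope seen above; data is a flat
	// string-to-string map in minikube's JSON output.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var e event
			if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
				continue // ignore any non-JSON lines
			}
			fmt.Printf("%-36s %s\n", e.Type, e.Data["message"])
		}
	}

Piping a captured start log through it (e.g. `minikube start --output=json ... | go run decode.go`) would list the step and error events in order, ending here with the RSRC_DOCKER_STORAGE error.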

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.96s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.0 on darwin
- MINIKUBE_LOCATION=16899
- KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1027469238/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1027469238/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1027469238/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/52/zh_qmlrn1f36yr6lgs7nxtym0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1027469238/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (10.96s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.54s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.54s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-938000
version_upgrade_test.go:218: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-938000: (3.472918237s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.47s)

                                                
                                    
TestPause/serial/Start (50.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-527000 --memory=2048 --install-addons=false --wait=all --driver=docker 
E0717 15:50:33.874957   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 15:50:45.755566   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-527000 --memory=2048 --install-addons=false --wait=all --driver=docker : (50.606906676s)
--- PASS: TestPause/serial/Start (50.61s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.45s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-527000 --alsologtostderr -v=1 --driver=docker 
E0717 15:51:09.524045   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-527000 --alsologtostderr -v=1 --driver=docker : (41.436675853s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.45s)

                                                
                                    
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-527000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-527000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-527000 --output=json --layout=cluster: exit status 2 (416.528013ms)

                                                
                                                
-- stdout --
	{"Name":"pause-527000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-527000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
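Note: `status --output=json --layout=cluster` returns a structured payload (418 = Paused, 405 = Stopped, 507 = InsufficientStorage in the captures above) and exits non-zero whenever the cluster is not fully running, exit 2 here while paused. A minimal Go sketch (hypothetical; profile name taken from this run) that tolerates the non-zero exit and decodes the payload:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type component struct {
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	}

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string               `json:"Name"`
			StatusName string               `json:"StatusName"`
			Components map[string]component `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		// status exits non-zero for any non-running cluster (2 while paused)
		// but still prints the JSON payload, so the error is deliberately ignored.
		out, _ := exec.Command("minikube", "status", "-p", "pause-527000",
			"--output=json", "--layout=cluster").Output()
		var cs clusterStatus
		if err := json.Unmarshal(out, &cs); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s: %s (%d)\n", cs.Name, cs.StatusName, cs.StatusCode)
		for _, n := range cs.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  %s/%s: %s (%d)\n", n.Name, name, c.StatusName, c.StatusCode)
			}
		}
	}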

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-527000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-527000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
TestPause/serial/DeletePaused (2.5s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-527000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-527000 --alsologtostderr -v=5: (2.500252522s)
--- PASS: TestPause/serial/DeletePaused (2.50s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.52s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-527000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-527000: exit status 1 (50.827108ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-527000: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)
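Note: the deleted-resources check treats a failing `docker volume inspect` that prints "[]" as proof the volume is gone. A minimal Go helper to the same effect (hypothetical; assumes the docker CLI on PATH):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// volumeGone mirrors the check above: `docker volume inspect` on a deleted
	// volume exits 1 and prints "[]"; either outcome is taken as "gone".
	func volumeGone(name string) bool {
		out, err := exec.Command("docker", "volume", "inspect", name).Output()
		return err != nil || strings.TrimSpace(string(out)) == "[]"
	}

	func main() {
		fmt.Println("pause-527000 gone:", volumeGone("pause-527000"))
	}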

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (424.647361ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-716000] minikube v1.31.0 on Darwin 13.4.1
	  - MINIKUBE_LOCATION=16899
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16899-76867/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16899-76867/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-716000 --driver=docker 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-716000 --driver=docker : (23.95747121s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-716000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.37s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (9.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --driver=docker : (6.822344573s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-716000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-716000 status -o json: exit status 2 (453.623984ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-716000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-716000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-716000: (2.467338922s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.74s)

                                                
                                    
TestNoKubernetes/serial/Start (8.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-716000 --no-kubernetes --driver=docker : (8.571338727s)
--- PASS: TestNoKubernetes/serial/Start (8.57s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-716000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-716000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (351.647808ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
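Note: the kubelet probe relies on `systemctl is-active --quiet`, which exits 0 for an active unit and 3 for an inactive one; that 3 is what surfaces as "ssh: Process exited with status 3" above. A minimal Go sketch of the same probe (hypothetical; assumes minikube on PATH, profile name from this run):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the test runs over `minikube ssh`; a non-zero exit from
		// systemctl is-active means the kubelet unit is not running.
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-716000",
			"sudo systemctl is-active --quiet service kubelet")
		err := cmd.Run()
		var ee *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &ee):
			fmt.Println("kubelet not active, exit code:", ee.ExitCode())
		default:
			fmt.Println("could not run probe:", err)
		}
	}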

                                                
                                    
TestNoKubernetes/serial/ProfileList (34.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (16.272281341s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (18.06670287s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.34s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-716000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-716000: (1.530162013s)
--- PASS: TestNoKubernetes/serial/Stop (1.53s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-716000 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-716000 --driver=docker : (8.007342624s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.01s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-716000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-716000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (342.240189ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (51.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0717 15:53:25.639847   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 15:53:48.814196   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 15:53:53.365503   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : (51.125724159s)
--- PASS: TestNetworkPlugins/group/auto/Start (51.13s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-b8ptm" [c458f114-bd5d-486b-879e-2a707d26764b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-b8ptm" [c458f114-bd5d-486b-879e-2a707d26764b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.007763481s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)
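Note: each network-plugin group runs the same three probes against the netcat deployment: DNS resolves kubernetes.default, Localhost netcats the pod's own port, and HairPin netcats the pod back through its own service name. A minimal Go sketch bundling the three kubectl exec probes (hypothetical; assumes kubectl on PATH, context name taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const ctx = "auto-679000" // kube-context from the run above
		probes := []struct {
			name string
			args []string
		}{
			{"DNS", []string{"nslookup", "kubernetes.default"}},
			{"Localhost", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}},
			{"HairPin", []string{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"}},
		}
		for _, p := range probes {
			// kubectl exec deployment/netcat -- <probe command>
			args := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, p.args...)
			if err := exec.Command("kubectl", args...).Run(); err != nil {
				fmt.Printf("%-9s fail: %v\n", p.name, err)
			} else {
				fmt.Printf("%-9s ok\n", p.name)
			}
		}
	}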

                                                
                                    
TestNetworkPlugins/group/flannel/Start (51.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : (51.872889288s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-srpml" [12263667-0aee-4242-b1da-bb521f0ef62d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019073408s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-6k954" [d1e246b9-1081-4b1e-9f75-4e08224ddf0a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0717 15:55:33.876174   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
helpers_test.go:344: "netcat-7458db8b8-6k954" [d1e246b9-1081-4b1e-9f75-4e08224ddf0a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.008499981s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (38.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : (38.831733369s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (38.83s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-cqv7k" [9e64f8b7-5647-41d4-a843-5425961a978f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-cqv7k" [9e64f8b7-5647-41d4-a843-5425961a978f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00754861s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : (51.88378496s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.88s)

TestNetworkPlugins/group/bridge/Start (37.64s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : (37.636741308s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.64s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pqxzm" [2eab8c31-6497-43ec-8089-08bc51ff6dbd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.017523269s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
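
The ControllerPod subtest gates the rest of the kindnet group: it waits for the plugin's agent pod (label app=kindnet in the kube-system namespace) to reach Running before any connectivity checks run. A rough stand-in for that wait loop, polling kubectl with a jsonpath query rather than using the suite's own pod-watch helpers (the context, namespace, and label come from the log; waitRunning, the 2-second poll interval, and the timeout plumbing are assumptions of this sketch):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitRunning polls until every pod matching the label reports phase Running.
	func waitRunning(ctx, ns, label string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
				"get", "pods", "-l", label,
				"-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				ok := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						ok = false
					}
				}
				if ok {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q not Running within %v", label, timeout)
	}

	func main() {
		fmt.Println(waitRunning("kindnet-679000", "kube-system", "app=kindnet", 10*time.Minute))
	}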

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-6sdvr" [6e3456c4-5cff-4eb5-b297-3eeeca7abd12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-6sdvr" [6e3456c4-5cff-4eb5-b297-3eeeca7abd12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.008566643s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.29s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.39s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-pwsdp" [09656840-ad16-416f-ba72-bb124bddffb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-pwsdp" [09656840-ad16-416f-ba72-bb124bddffb3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.008476262s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.28s)

TestNetworkPlugins/group/bridge/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/kubenet/Start (49.33s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : (49.330977958s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.33s)

TestNetworkPlugins/group/custom-flannel/Start (51.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
E0717 15:59:00.848308   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:00.854672   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:00.865534   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:00.885802   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:00.927060   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:01.008768   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:01.181032   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:01.501460   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:02.142035   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:03.422950   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:05.983084   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:11.118581   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 15:59:21.359005   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : (51.269171628s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.27s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.41s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-fcfcn" [9724ed45-c159-48e1-a465-c998f44dc6b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-fcfcn" [9724ed45-c159-48e1-a465-c998f44dc6b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.006920781s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.31s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-989xq" [97feecec-9b0c-4b8e-8f89-6626dedf3d71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-989xq" [97feecec-9b0c-4b8e-8f89-6626dedf3d71] Running
E0717 15:59:41.840165   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.007148611s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

TestNetworkPlugins/group/kubenet/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.16s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/kubenet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/calico/Start (66.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : (1m6.508117135s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.51s)

TestNetworkPlugins/group/false/Start (38.35s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
E0717 16:00:22.801516   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 16:00:27.547874   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.553246   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.563554   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.583722   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.625986   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.706266   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:27.867287   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:28.188068   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:28.830436   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:30.113098   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:32.677913   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:33.892407   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 16:00:37.805401   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:00:45.785454   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:00:48.053726   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-679000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : (38.349462244s)
--- PASS: TestNetworkPlugins/group/false/Start (38.35s)
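
By this point every profile in the plugin matrix has been started; the invocations differ only in how the network plugin is selected: --cni=<name> for kindnet, bridge, and calico, --cni=<manifest> for the custom flannel YAML, --cni=false to disable CNI, and the legacy --network-plugin=kubenet. A compact sketch of that start matrix (binary path, profile names, and flags are copied from the Start blocks above; running the profiles sequentially like this, rather than through the suite's parallel groups, is a simplification):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// One profile per plugin, mirroring the flags seen in the Start blocks.
		matrix := map[string][]string{
			"kindnet-679000":        {"--cni=kindnet"},
			"bridge-679000":         {"--cni=bridge"},
			"kubenet-679000":        {"--network-plugin=kubenet"},
			"custom-flannel-679000": {"--cni=testdata/kube-flannel.yaml"},
			"calico-679000":         {"--cni=calico"},
			"false-679000":          {"--cni=false"},
		}
		for profile, extra := range matrix {
			args := append([]string{"start", "-p", profile, "--memory=3072",
				"--alsologtostderr", "--wait=true", "--wait-timeout=15m", "--driver=docker"}, extra...)
			out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
			fmt.Println(profile, err, len(out))
		}
	}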

TestNetworkPlugins/group/false/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.41s)

TestNetworkPlugins/group/false/NetCatPod (11.3s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-dm84d" [2adb4eb5-e99e-4a37-92b0-c36b671b2beb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-dm84d" [2adb4eb5-e99e-4a37-92b0-c36b671b2beb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.007476998s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.30s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gwlzg" [8e306dbd-72b4-489d-b331-99bf5c8ba77e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018871125s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-679000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.33s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-679000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-7458db8b8-xxs56" [0503ff36-96f8-46c6-a94c-d4eaf8aace37] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-7458db8b8-xxs56" [0503ff36-96f8-46c6-a94c-d4eaf8aace37] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.009121099s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.33s)

TestNetworkPlugins/group/calico/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-679000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-679000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestStartStop/group/no-preload/serial/FirstStart (68.57s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3
E0717 16:01:53.643770   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:01:58.764058   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:02:09.004351   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:02:29.485966   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3: (1m8.567569196s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.57s)

TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-042000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4313d5cb-e652-4fd5-b8dd-236659f6f2f8] Pending
E0717 16:03:01.750388   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:01.755753   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:01.765882   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:01.786022   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:01.827458   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:01.909585   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:02.070600   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4313d5cb-e652-4fd5-b8dd-236659f6f2f8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 16:03:02.390911   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:02.894660   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:02.900469   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:02.911206   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:02.931526   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:02.972602   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:03.031271   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:03.053554   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:03.213642   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:03.533830   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:04.174185   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:04.311702   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4313d5cb-e652-4fd5-b8dd-236659f6f2f8] Running
E0717 16:03:05.456449   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:06.873466   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:08.016617   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:10.447793   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.01630455s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-042000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)
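
The E…cert_rotation.go:168 lines interleaved here (and in the Start blocks above) appear to come from client-go's certificate-rotation watcher still referencing client certificates of profiles torn down earlier in the run (kindnet-679000, bridge-679000, and so on); they are shared-process log noise, not failures of this test. The DeployApp step itself creates a busybox pod from a manifest, waits for it to become healthy, then runs ulimit -n inside it to confirm the container has a sane file-descriptor limit. A condensed sketch of that flow (testdata/busybox.yaml and the context name are from the log; kubectl wait stands in for the suite's own pod-polling helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(name string, args ...string) (string, error) {
		out, err := exec.Command(name, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		ctx := "no-preload-042000"
		// Create the pod from the manifest used by the suite.
		if out, err := run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
			fmt.Println(out, err)
			return
		}
		// Wait for it to become Ready (the suite polls pod status instead).
		if out, err := run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
			"pod/busybox", "--timeout=8m"); err != nil {
			fmt.Println(out, err)
			return
		}
		// Verify the file-descriptor limit inside the container.
		out, err := run("kubectl", "--context", ctx, "exec", "busybox", "--",
			"/bin/sh", "-c", "ulimit -n")
		fmt.Println(out, err)
	}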

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-042000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 16:03:11.425707   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:03:11.993715   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-042000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049756962s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-042000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/no-preload/serial/Stop (10.9s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-042000 --alsologtostderr -v=3
E0717 16:03:13.137117   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:03:22.234217   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:23.378380   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-042000 --alsologtostderr -v=3: (10.903959851s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.90s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000: exit status 7 (92.327615ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-042000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.40s)
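
Note the status/exit-code pattern used after a Stop: minikube status exits non-zero when the host is not Running, the harness records "status error: exit status 7 (may be ok)", and enabling the dashboard addon is still expected to succeed against the stopped profile. A small sketch of reading that exit code from Go (binary path and profile name as in the log; interpreting 7 as "stopped" is inferred from the Stopped stdout above, not from a documented contract):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "status",
			"--format={{.Host}}", "-p", "no-preload-042000", "-n", "no-preload-042000")
		out, err := cmd.Output()
		fmt.Printf("host state: %s\n", out)

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// A non-zero code here is expected for a stopped profile
			// (the log above shows exit status 7 with "Stopped" on stdout).
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}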

TestStartStop/group/no-preload/serial/SecondStart (333.75s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3
E0717 16:03:25.678949   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:03:42.715026   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:03:43.859445   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:04:00.887228   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 16:04:23.676594   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:04:24.820186   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:04:27.161820   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.168242   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.180427   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.201876   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.241950   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.323156   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.483941   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:27.805027   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:28.445317   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:28.601514   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 16:04:29.726590   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:32.287636   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:32.371021   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:04:35.007207   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.012849   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.023247   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.043921   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.084942   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.167108   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.327982   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:35.648909   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:36.289962   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:37.407912   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:37.571677   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:40.131930   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:45.254174   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:04:47.649560   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:04:48.767332   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:04:55.496531   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:05:08.130481   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:05:15.977064   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:05:16.978324   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
E0717 16:05:27.582929   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:05:33.917073   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-042000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.27.3: (5m33.3503934s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-042000 -n no-preload-042000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (333.75s)

TestStartStop/group/old-k8s-version/serial/Stop (1.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-770000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-770000 --alsologtostderr -v=3: (1.54174556s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-770000 -n old-k8s-version-770000: exit status 7 (92.537659ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-770000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qpk76" [3ad6472e-cb80-48cf-b939-6226ff931814] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0717 16:09:00.890514   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qpk76" [3ad6472e-cb80-48cf-b939-6226ff931814] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 22.013887463s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (22.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-qpk76" [3ad6472e-cb80-48cf-b939-6226ff931814] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008836753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-042000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-042000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.42s)
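
VerifyKubernetesImages lists the images present in the node's container runtime over SSH (sudo crictl images -o json) and reports anything outside the expected minikube image set, which is why the busybox and kindnetd images are called out as non-minikube above. A sketch of the list-and-decode step (the images/repoTags JSON field names follow crictl's CRI output as an assumption; the profile name and binary path are from the log):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList matches the shape of `crictl images -o json` output
	// (field names assumed from CRI conventions).
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "no-preload-042000",
			"sudo crictl images -o json").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println(err)
			return
		}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				fmt.Println(tag)
			}
		}
	}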

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-042000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-042000 -n no-preload-042000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-042000 -n no-preload-042000: exit status 2 (383.920357ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-042000 -n no-preload-042000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-042000 -n no-preload-042000: exit status 2 (387.750798ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-042000 --alsologtostderr -v=1
E0717 16:09:27.165184   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-042000 -n no-preload-042000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-042000 -n no-preload-042000
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (51.86s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-306000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3
E0717 16:09:35.010522   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:09:54.899349   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:10:02.765139   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-306000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3: (51.86097848s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-306000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [31aca309-82b8-4566-87b0-43e4acd7cb20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0717 16:10:27.586855   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
helpers_test.go:344: "busybox" [31aca309-82b8-4566-87b0-43e4acd7cb20] Running
E0717 16:10:28.861762   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.015939048s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-306000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
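
The DeployApp step creates the pod from testdata/busybox.yaml and then polls pods matching the integration-test=busybox label until they report Running and Ready. Roughly the same wait can be reproduced with kubectl alone; a sketch, assuming the embed-certs-306000 context used above:

	kubectl --context embed-certs-306000 create -f testdata/busybox.yaml
	kubectl --context embed-certs-306000 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	kubectl --context embed-certs-306000 exec busybox -- /bin/sh -c "ulimit -n"   # the test's final sanity check on the fd limit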

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-306000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0717 16:10:33.920985   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-306000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.175201833s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-306000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (10.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-306000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-306000 --alsologtostderr -v=3: (10.915344235s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-306000 -n embed-certs-306000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-306000 -n embed-certs-306000: exit status 7 (101.45868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-306000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.42s)
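
Here the guard is a different status code: against a stopped cluster, `minikube status --format={{.Host}}` prints "Stopped" and exits 7, and addons can still be toggled (the change lands in the stored profile config rather than the live cluster). A sketch of the same check, assuming the embed-certs-306000 profile:

	minikube status --format='{{.Host}}' -p embed-certs-306000
	echo "status exit code: $?"   # 7 while the host is stopped
	minikube addons enable dashboard -p embed-certs-306000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4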

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (334.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-306000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3
E0717 16:10:45.802719   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/functional-554000/client.crt: no such file or directory
E0717 16:10:50.456385   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:11:08.769472   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:11:18.148503   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/false-679000/client.crt: no such file or directory
E0717 16:11:36.461149   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/calico-679000/client.crt: no such file or directory
E0717 16:11:48.525139   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
E0717 16:13:01.255312   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.260587   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.270926   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.293045   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.333319   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.413813   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.574722   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:01.758758   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kindnet-679000/client.crt: no such file or directory
E0717 16:13:01.894908   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:02.535138   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:02.902550   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/bridge-679000/client.crt: no such file or directory
E0717 16:13:03.816190   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:06.376899   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:11.498453   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:21.738753   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:13:25.687605   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/skaffold-258000/client.crt: no such file or directory
E0717 16:13:42.220609   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:14:00.894677   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 16:14:23.181466   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/no-preload-042000/client.crt: no such file or directory
E0717 16:14:27.168438   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/kubenet-679000/client.crt: no such file or directory
E0717 16:14:35.013996   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/custom-flannel-679000/client.crt: no such file or directory
E0717 16:15:23.969957   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
E0717 16:15:27.589868   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/flannel-679000/client.crt: no such file or directory
E0717 16:15:33.924191   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/addons-230000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-306000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.27.3: (5m33.998028146s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-306000 -n embed-certs-306000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5p78t" [197ef5bf-448a-454e-870e-c057615444c8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5p78t" [197ef5bf-448a-454e-870e-c057615444c8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.014844509s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (21.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-5p78t" [197ef5bf-448a-454e-870e-c057615444c8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009990434s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-306000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-306000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.44s)
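
VerifyKubernetesImages pulls the node's CRI image list over SSH as JSON and flags anything outside the expected minikube image set (the busybox and kindnetd entries above). The raw listing can be inspected by hand; a sketch, assuming jq is available on the host:

	minikube ssh -p embed-certs-306000 "sudo crictl images -o json" | jq -r '.images[].repoTags[]?'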

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-306000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-306000 -n embed-certs-306000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-306000 -n embed-certs-306000: exit status 2 (392.502629ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-306000 -n embed-certs-306000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-306000 -n embed-certs-306000: exit status 2 (396.884414ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-306000 --alsologtostderr -v=1
E0717 16:16:48.528567   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/enable-default-cni-679000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-306000 -n embed-certs-306000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-306000 -n embed-certs-306000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-651000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-651000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3: (51.784175445s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.78s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-651000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5372b4c6-8c50-40de-a27c-aff27812c051] Pending
helpers_test.go:344: "busybox" [5372b4c6-8c50-40de-a27c-aff27812c051] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5372b4c6-8c50-40de-a27c-aff27812c051] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.018495889s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-651000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-651000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-651000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.151025841s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-651000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-651000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-651000 --alsologtostderr -v=3: (10.846907542s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000: exit status 7 (94.331127ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-651000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-651000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-651000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.27.3: (5m34.169649642s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (334.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (24.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mbjz8" [70232c30-da69-4977-880b-a90a2d085e6d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mbjz8" [70232c30-da69-4977-880b-a90a2d085e6d] Running
E0717 16:24:00.936972   77324 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16899-76867/.minikube/profiles/auto-679000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.013327377s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (24.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5c5cfc8747-mbjz8" [70232c30-da69-4977-880b-a90a2d085e6d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009567273s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-651000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-651000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-651000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000: exit status 2 (395.735306ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000: exit status 2 (384.985221ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-651000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-651000 -n default-k8s-diff-port-651000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-958000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-958000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3: (34.431776711s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.43s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-958000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-958000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.48417989s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-958000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-958000 --alsologtostderr -v=3: (11.49417304s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-958000 -n newest-cni-958000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-958000 -n newest-cni-958000: exit status 7 (95.785923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-958000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (29.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-958000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-958000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.27.3: (28.742715415s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-958000 -n newest-cni-958000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.15s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-958000 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.43s)

                                                
                                    

Test skip (19/317)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (18.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 14.787755ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-xvx2n" [6211c056-1425-4317-b639-123403d409c2] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013552131s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9zgnx" [ad6d1be7-2db6-4df0-a450-4e23a7539c7b] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.044240967s
addons_test.go:316: (dbg) Run:  kubectl --context addons-230000 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-230000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-230000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (8.256342225s)
addons_test.go:331: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (18.46s)

                                                
                                    
TestAddons/parallel/Ingress (11.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-230000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-230000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-230000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [8d7ed99d-bd18-47fd-a42e-91b7fd2da651] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [8d7ed99d-bd18-47fd-a42e-91b7fd2da651] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.01194343s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-amd64 -p addons-230000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:258: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.30s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-554000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-554000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6fb669fc84-t2nw9" [5af8a254-fcb7-4f36-8ed9-6c39de8a2cba] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-6fb669fc84-t2nw9" [5af8a254-fcb7-4f36-8ed9-6c39de8a2cba] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008036381s
functional_test.go:1645: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.13s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-679000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-679000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-679000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-679000" does not exist

>>> k8s: netcat logs:
error: context "cilium-679000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-679000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-679000" does not exist

>>> k8s: coredns logs:
error: context "cilium-679000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-679000" does not exist

>>> k8s: api server logs:
error: context "cilium-679000" does not exist

>>> host: /etc/cni:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: ip a s:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: ip r s:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: iptables-save:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: iptables table nat:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-679000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-679000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-679000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-679000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-679000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-679000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-679000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-679000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-679000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-679000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-679000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: kubelet daemon config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> k8s: kubelet logs:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-679000

>>> host: docker daemon status:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: docker daemon config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: docker system info:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: cri-docker daemon status:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: cri-docker daemon config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: cri-dockerd version:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: containerd daemon status:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: containerd daemon config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: containerd config dump:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: crio daemon status:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: crio daemon config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: /etc/crio:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

>>> host: crio config:
* Profile "cilium-679000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-679000"

----------------------- debugLogs end: cilium-679000 [took: 5.292339682s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-679000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-679000
--- SKIP: TestNetworkPlugins/group/cilium (5.70s)

TestStartStop/group/disable-driver-mounts (0.38s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-278000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-278000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.38s)
